Results 1 - 20 of 51
1.
Environ Health Perspect ; 132(8): 85002, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39106156

ABSTRACT

BACKGROUND: The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAMs development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, because of their intricacy, deep understanding and careful selection are imperative to align the appropriate methods with their intended applications. OBJECTIVES: This commentary aims to deepen the understanding of class-based approaches by elucidating the pivotal role of chemical similarity (structural and biological) in clustering and classification approaches (CCAs). It addresses the dichotomy between general end point-agnostic similarity, often entailing unsupervised analysis, and end point-specific similarity necessitating supervised learning. The goal is to highlight the nuances of these approaches, their applications, and common misuses. DISCUSSION: Understanding similarity is pivotal in toxicological research involving CCAs. The effectiveness of these approaches depends on the right definition and measure of similarity, which vary with the context and objectives of the study. This choice is influenced by how chemical structures are represented and by the labels indicating biological activity, if applicable. The distinction between unsupervised clustering and supervised classification methods is vital, requiring the use of end point-agnostic vs. end point-specific similarity definitions. Separate use or combination of these methods requires careful consideration to prevent bias and ensure relevance to the goal of the study. Unsupervised methods use end point-agnostic similarity measures to uncover general structural patterns and relationships, aiding hypothesis generation and facilitating exploration of datasets without the need for predefined labels or explicit guidance. Conversely, supervised techniques demand end point-specific similarity to group chemicals into predefined classes or to train classification models, allowing accurate predictions for new chemicals. Misuse can arise when unsupervised methods are applied to end point-specific contexts, such as analog selection in read-across, leading to erroneous conclusions. This commentary provides insights into the significance of similarity and its role in supervised classification and unsupervised clustering approaches. https://doi.org/10.1289/EHP14001.
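The contrast between end point-agnostic and end point-specific similarity can be illustrated with a minimal Python sketch (not taken from the commentary): unsupervised clustering groups chemicals on structural fingerprints alone, while a supervised classifier additionally uses activity labels for a specific end point. The SMILES strings and labels below are illustrative placeholders.

```python
# Minimal sketch: end point-agnostic clustering vs. end point-specific
# classification on molecular fingerprints. SMILES and labels are placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "CCCO", "c1ccccc1", "c1ccccc1O", "CC(=O)O", "CCC(=O)O"]
labels = [0, 0, 1, 1, 0, 0]  # hypothetical end point-specific activity calls

mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = np.array([list(AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=1024))
                for m in mols])

# Unsupervised: grouping driven only by structural similarity (no labels).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fps)

# Supervised: model trained on labeled activity for one specific end point.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fps, labels)

print("end point-agnostic clusters:", clusters)
print("end point-specific prediction for phenol:", clf.predict(fps[[3]])[0])
```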


Subjects
Machine Learning, Cluster Analysis, Unsupervised Machine Learning, Toxicology/methods, Algorithms
2.
J Cheminform ; 16(1): 101, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152469

ABSTRACT

With the increased availability of chemical data in public databases, innovative techniques and algorithms have emerged for the analysis, exploration, visualization, and extraction of information from these data. One such technique is chemical grouping, where chemicals with common characteristics are categorized into distinct groups based on physicochemical properties, use, biological activity, or a combination. However, existing tools for chemical grouping often require specialized programming skills or the use of commercial software packages. To address these challenges, we developed a user-friendly chemical grouping workflow implemented in KNIME, a free, open-source, low/no-code, data analytics platform. The workflow serves as an all-encompassing tool, expertly incorporating a range of processes such as molecular descriptor calculation, feature selection, dimensionality reduction, hyperparameter search, and supervised and unsupervised machine learning methods, enabling effective chemical grouping and visualization of results. Furthermore, we implemented tools for interpretation, identifying key molecular descriptors for the chemical groups, and using natural language summaries to clarify the rationale behind these groupings. The workflow was designed to run seamlessly in both the KNIME local desktop version and KNIME Server WebPortal as a web application. It incorporates interactive interfaces and guides to assist users in a step-by-step manner. We demonstrate the utility of this workflow through a case study using an eye irritation and corrosion dataset.

Scientific contributions: This work presents a novel, comprehensive chemical grouping workflow in KNIME, enhancing accessibility by integrating a user-friendly graphical interface that eliminates the need for extensive programming skills. This workflow uniquely combines several features such as automated molecular descriptor calculation, feature selection, dimensionality reduction, and machine learning algorithms (both supervised and unsupervised), with hyperparameter optimization to refine chemical grouping accuracy. Moreover, we have introduced an innovative interpretative step and natural language summaries to elucidate the underlying reasons for chemical groupings, significantly advancing the usability of the tool and interpretability of the results.
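The sketch below is a rough Python analogue of the grouping steps named above (descriptor calculation, feature scaling and selection, dimensionality reduction, clustering); it is not the authors' KNIME workflow, and the SMILES are placeholders.

```python
# Rough analogue of the described pipeline: descriptors -> scaling ->
# feature selection -> dimensionality reduction -> clustering.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

smiles = ["CCO", "CCN", "c1ccccc1", "c1ccncc1", "CC(=O)OC", "CCOC(C)=O"]
mols = [Chem.MolFromSmiles(s) for s in smiles]

# A few physicochemical descriptors per molecule.
X = np.array([[Descriptors.MolWt(m),
               Descriptors.MolLogP(m),
               Descriptors.TPSA(m),
               Descriptors.NumRotatableBonds(m)] for m in mols])

X = StandardScaler().fit_transform(X)
X = VarianceThreshold(threshold=0.0).fit_transform(X)  # drop constant features
X2 = PCA(n_components=2).fit_transform(X)              # 2-D embedding
groups = AgglomerativeClustering(n_clusters=3).fit_predict(X2)
print(dict(zip(smiles, groups)))
```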

3.
Vaccine X ; 19: 100503, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38868522

ABSTRACT

Scorpion envenoming (SE) is a public health problem in developing countries. In Algeria, the population exposed to the risk of SE was estimated at 86.45% in 2019. Thus, the development of a vaccine to protect the exposed population against scorpion toxins would be a major advance in the fight against this disease. This work aimed to evaluate the immunoprotective effect of a Multiple Antigenic Peptide against the Aah II toxin of the Androctonus australis hector scorpion, the most dangerous scorpion species in Algeria. The immunogen MAP1Aah2 was designed and tested accordingly. This molecule contains a B epitope, derived from the Aah II toxin, linked by a spacer to a universal T epitope derived from the tetanus toxin. The results showed that MAP1Aah2 was non-toxic even though its sequence was derived from the Aah II toxin. The immunoenzymatic assay revealed that the three immunization regimens tested generated specific anti-MAP1Aah2 antibodies that cross-reacted with the toxin. Mice immunized with this immunogen were partially protected against mortality caused by challenge doses of 2 and 3 LD50 of the toxin. The survival rate and the symptoms developed varied depending on the adjuvant and the challenge dose used. In the in vitro neutralization test, the immune sera of mice that received the immunogen with incomplete Freund's adjuvant neutralized a challenge dose of 2 LD50. Hence, the concept of using peptide dendrimers based on linear epitopes of scorpion toxins as immunogens against the parent toxin was established. However, the protective properties of the tested immunogen require further optimization.

4.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
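A minimal sketch of the categorical comparison described above: bin LD50 values into U.S. EPA acute oral toxicity categories and compute the concordance between predicted and empirical values. Category cut-offs follow standard EPA bins (I ≤ 50, II 50-500, III 500-5000, IV > 5000 mg/kg); the LD50 values are invented.

```python
# Bin LD50 values into EPA acute oral toxicity categories and compute the
# categorical concordance between predicted and empirical values.
def epa_category(ld50_mg_kg: float) -> str:
    if ld50_mg_kg <= 50:
        return "I"
    if ld50_mg_kg <= 500:
        return "II"
    if ld50_mg_kg <= 5000:
        return "III"
    return "IV"

# Illustrative (empirical in vivo LD50, model-predicted LD50) pairs in mg/kg.
pairs = [(300, 450), (1200, 900), (5400, 6100), (80, 40), (2500, 2000)]

matches = sum(epa_category(e) == epa_category(p) for e, p in pairs)
print(f"categorical concordance: {matches / len(pairs):.0%}")
```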


Subjects
Computer Simulation, Pesticides, Acute Toxicity Tests, Quantitative Structure-Activity Relationship, United States Environmental Protection Agency, Animals, Risk Assessment, Pesticides/toxicity, Median Lethal Dose, Rats, Oral Administration, Acute Toxicity Tests/methods, United States, Reproducibility of Results
5.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and the associated experimental data. This is especially true when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read; second, the resulting in-memory representation is cross-referenced with any existing identifiers for consistency; finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. Both QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community.

Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes molecular descriptors' accuracy and reliability. The freely available resources in KNIME, GitHub, and Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
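A minimal RDKit sketch of a few of the standardization operations listed above (desalting, stereochemistry stripping, canonicalization, duplicate removal). It is a simplified stand-in, not the published QSAR-ready workflow; tautomer/nitro standardization, valence correction, and neutralization are omitted, and the input SMILES are placeholders.

```python
from rdkit import Chem
from rdkit.Chem.SaltRemover import SaltRemover

remover = SaltRemover()  # RDKit's default salt definitions

def qsar_ready(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None                      # unparseable structure
    mol = remover.StripMol(mol)          # desalting
    Chem.RemoveStereochemistry(mol)      # 2-D form: drop stereo flags
    return Chem.MolToSmiles(mol)         # canonical SMILES for deduplication

raw = ["C[C@H](N)C(=O)O.Cl", "CC(N)C(=O)O", "c1ccccc1", "c1ccccc1"]
standardized = {s for s in (qsar_ready(x) for x in raw) if s}
print(standardized)   # salt form, stereoisomer, and duplicate collapse together
```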

6.
Front Pharmacol ; 13: 980747, 2022.
Article in English | MEDLINE | ID: mdl-36278238

ABSTRACT

Current computational technologies hold promise for prioritizing the testing of the thousands of chemicals in commerce. Here, a case study is presented demonstrating comparative risk-prioritization approaches based on the ratio of surrogate hazard and exposure data, called margins of exposure (MoEs). Exposures were estimated using results from the U.S. EPA's ExpoCast predictive model (SEEM3), and estimates of bioactivity were predicted using: 1) oral equivalent doses (OEDs) derived from U.S. EPA's ToxCast high-throughput screening program, together with in vitro to in vivo extrapolation, and 2) thresholds of toxicological concern (TTCs) determined using a structure-based decision tree in the Toxtree open-source software. To ground-truth these computational approaches, we compared the MoEs based on predicted noncancer TTC and OED values to those derived using the traditional method of deriving points of departure from no-observed-adverse-effect levels (NOAELs) from in vivo oral exposures in rodents. TTC-based MoEs were lower than NOAEL-based MoEs for 520 out of 522 (99.6%) compounds in this smaller overlapping dataset, but were relatively well correlated with them (r² = 0.59). TTC-based MoEs were also lower than OED-based MoEs for 590 (83.2%) of the 709 evaluated chemicals, indicating that TTCs may serve as a conservative surrogate in the absence of chemical-specific experimental data. The TTC-based MoE prioritization process was then applied to over 45,000 curated environmental chemical structures as a proof of concept for high-throughput prioritization using TTC-based MoEs. This study demonstrates the utility of exploiting existing computational methods at the pre-assessment phase of a tiered risk-based approach to quickly, and conservatively, prioritize thousands of untested chemicals for further study.
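The core prioritization metric above is a simple ratio of a hazard surrogate to a predicted exposure; a small sketch with invented values (not from the study):

```python
# Margin of exposure (MoE) = hazard surrogate / predicted exposure, both in
# mg/kg-bw/day. Smaller MoE = higher priority for follow-up.
hazard_surrogate = {"chem_A": 0.0015, "chem_B": 0.09, "chem_C": 1.5}   # e.g., TTC or OED
predicted_exposure = {"chem_A": 1e-6, "chem_B": 3e-4, "chem_C": 2e-2}  # e.g., SEEM3-style estimate

moe = {c: hazard_surrogate[c] / predicted_exposure[c] for c in hazard_surrogate}
for chem in sorted(moe, key=moe.get):
    print(f"{chem}: MoE = {moe[chem]:,.0f}")
```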

7.
Birth Defects Res ; 114(16): 1037-1055, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35532929

ABSTRACT

BACKGROUND: The developmental toxicity potential (dTP) concentration from the devTOX quickPredict (devTOXqP) assay, a metabolomics-based human induced pluripotent stem cell assay, predicts a chemical's developmental toxicity potency. Here, in vitro to in vivo extrapolation (IVIVE) approaches were applied to address whether the devTOXqP assay could quantitatively predict in vivo developmental toxicity lowest effect levels (LELs) for the prototypical teratogen valproic acid (VPA) and a group of structural analogues. METHODS: VPA and a series of structural analogues were tested with the devTOXqP assay to determine the dTP concentration, and we estimated the equivalent administered doses (EADs) that would lead to plasma concentrations equivalent to the in vitro dTP concentrations. The EADs were compared to the LELs in rat developmental toxicity studies, human clinical doses, and EADs reported using other in vitro assays. To evaluate the impact of different pharmacokinetic (PK) models on IVIVE outcomes, we compared EADs predicted using various open-source and commercially available PK and physiologically based PK (PBPK) models. To evaluate the effect of in vitro kinetics, an equilibrium distribution model was applied to translate dTP concentrations to free medium concentrations before subsequent IVIVE analyses. RESULTS: The EAD estimates for the VPA analogues based on different PK/PBPK models were quantitatively similar to in vivo data from both rats and humans, where available, and the derived rank order of the chemicals was consistent with observed in vivo developmental toxicity. Different models were identified that provided accurate predictions for rat prenatal LELs and conservative estimates of human safe exposure. The impact of in vitro kinetics on EAD estimates is chemical-dependent. EADs from this study were within the range of doses predicted from other in vitro and model organism data. CONCLUSIONS: This study highlights the importance of pharmacokinetic considerations when using in vitro assays and demonstrates the utility of the devTOXqP human stem cell-based platform to quantitatively assess a chemical's developmental toxicity potency.
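A simplified reverse-dosimetry sketch of the IVIVE step described above: scale the in vitro dTP concentration by the steady-state plasma concentration (Css) predicted for a unit oral dose, assuming linear kinetics. The study used full PK/PBPK models; the Css and dTP values below are illustrative only.

```python
def equivalent_administered_dose(dtp_uM, css_uM_at_unit_dose, unit_dose=1.0):
    """EAD (mg/kg/day) whose plasma level matches the in vitro dTP concentration."""
    return unit_dose * dtp_uM / css_uM_at_unit_dose

# chemical: (in vitro dTP concentration in uM, Css in uM at 1 mg/kg/day), illustrative
chemicals = {"valproic acid": (320.0, 18.0), "analogue_X": (75.0, 4.2)}

for name, (dtp, css) in chemicals.items():
    print(f"{name}: EAD ~ {equivalent_administered_dose(dtp, css):.1f} mg/kg/day")
```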


Subjects
Induced Pluripotent Stem Cells, Valproic Acid, Animals, Female, Humans, Pregnancy, Rats, Teratogens/toxicity, Valproic Acid/toxicity
8.
Front Pharmacol ; 13: 864742, 2022.
Article in English | MEDLINE | ID: mdl-35496281

ABSTRACT

Regulatory toxicology testing has traditionally relied on in vivo methods to inform decision-making. However, scientific, practical, and ethical considerations have led to an increased interest in the use of in vitro and in silico methods to fill data gaps. While in vitro experiments have the advantage of rapid application across large chemical sets, interpretation of data coming from these non-animal methods can be challenging due to the mechanistic nature of many assays. In vitro to in vivo extrapolation (IVIVE) has emerged as a computational tool to help facilitate this task. Specifically, IVIVE uses physiologically based pharmacokinetic (PBPK) models to estimate tissue-level chemical concentrations based on various dosing parameters. This approach is used to estimate the administered dose needed to achieve in vitro bioactivity concentrations within the body. IVIVE results can be useful to inform on metrics such as margin of exposure or to prioritize potential chemicals of concern, but the PBPK models used in this approach have extensive data requirements. Thus, access to input parameters, as well as the technical requirements of applying and interpreting models, has limited the use of IVIVE as a routine part of in vitro testing. As interest in using non-animal methods for regulatory and research contexts continues to grow, our perspective is that access to computational support tools for PBPK modeling and IVIVE will be essential for facilitating broader application and acceptance of these techniques, as well as for encouraging the most scientifically sound interpretation of in vitro results. We highlight recent developments in two open-access computational support tools for PBPK modeling and IVIVE accessible via the Integrated Chemical Environment (https://ice.ntp.niehs.nih.gov/), demonstrate the types of insights these tools can provide, and discuss how these analyses may inform in vitro-based decision making.

9.
Toxicol Sci ; 188(1): 34-47, 2022 06 28.
Article in English | MEDLINE | ID: mdl-35426934

ABSTRACT

Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability in the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize the variability and reproducibility of results across a set of up to 2441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization, on average, at only 60% likelihood. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat; feed used; treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as the sources of variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we were able to quantify a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
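The two summary statistics above can be computed from replicate LD50 records per chemical, as in this sketch (replicate values invented): pairwise agreement on EPA hazard category and the spread of log10(LD50).

```python
import math
from itertools import combinations
from statistics import mean, stdev

def epa_category(ld50):
    return "I" if ld50 <= 50 else "II" if ld50 <= 500 else "III" if ld50 <= 5000 else "IV"

replicates = {                       # chemical -> replicate LD50s (mg/kg), illustrative
    "chem_A": [210, 480, 950],
    "chem_B": [3200, 2600, 5100],
    "chem_C": [45, 60],
}

agree = total = 0
spreads = []
for ld50s in replicates.values():
    for a, b in combinations(ld50s, 2):
        total += 1
        agree += epa_category(a) == epa_category(b)
    spreads.append(stdev(math.log10(x) for x in ld50s))

print(f"pairwise category agreement: {agree / total:.0%}")
print(f"mean within-chemical SD of log10(LD50): {mean(spreads):.2f} log10(mg/kg)")
```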


Subjects
Reproducibility of Results, Animals, Factual Databases, Probability, Rats, Risk Assessment/methods, Acute Toxicity Tests/methods
12.
Environ Health Perspect ; 129(4): 47013, 2021 04.
Article in English | MEDLINE | ID: mdl-33929906

ABSTRACT

BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: lethal dose 50 (LD50) value, U.S. Environmental Protection Agency hazard (four) categories, Globally Harmonized System for Classification and Labeling hazard (five) categories, very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies. CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.
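A sketch of the consensus step described above: for each chemical, combine only the predictions of models whose applicability domain covers it. The model outputs and domain flags below are invented placeholders.

```python
# Each model contributes (log10 LD50 prediction, chemical-within-applicability-domain flag).
predictions = {
    "model_01": (2.8, True),
    "model_02": (3.1, True),
    "model_03": (1.9, False),   # outside this model's applicability domain -> ignored
    "model_04": (2.9, True),
}

in_domain = [p for p, in_ad in predictions.values() if in_ad]
consensus = sum(in_domain) / len(in_domain)
print(f"consensus LD50 ~ {10 ** consensus:.0f} mg/kg "
      f"(from {len(in_domain)} of {len(predictions)} models)")
```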


Subjects
Government Agencies, Animals, Computer Simulation, Rats, Acute Toxicity Tests, United States, United States Environmental Protection Agency
13.
ALTEX ; 38(2): 327-335, 2021.
Article in English | MEDLINE | ID: mdl-33511999

ABSTRACT

Efforts are underway to develop and implement nonanimal approaches which can characterize acute systemic lethality. A workshop was held in October 2019 to discuss developments in the prediction of acute oral lethality for chemicals and mixtures, as well as progress and needs in the understanding and modeling of mechanisms of acute lethality. During the workshop, each speaker led the group through a series of charge questions to determine clear next steps to progress the aims of the workshop. Participants concluded that a variety of approaches will be needed and should be applied in a tiered fashion. Non-testing approaches, including waiving tests, computational models for single chemicals, and calculating the acute lethality of mixtures based on the LD50 values of mixture components, could be used for some assessments now, especially in the very toxic or non-toxic classification ranges. Agencies can develop policies indicating contexts under which mathematical approaches for mixtures assessment are acceptable; to expand applicability, poorly predicted mixtures should be examined to understand discrepancies and adapt the approach. Transparency and an understanding of the variability of in vivo approaches are crucial to facilitate regulatory application of new approaches. In a replacement strategy, mechanistically based in vitro or in silico models will be needed to support non-testing approaches especially for highly acutely toxic chemicals. The workshop discussed approaches that can be used in the immediate or near term for some applications and identified remaining actions needed to implement approaches to fully replace the use of animals for acute systemic toxicity testing.
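One of the non-testing approaches mentioned, calculating the acute lethality of a mixture from component LD50 values, is commonly done with a dose-additivity formula, 1/LD50_mix = Σ(f_i / LD50_i), where f_i is the mass fraction of component i. The sketch below uses invented component data.

```python
def mixture_ld50(components):
    """components: list of (mass_fraction, ld50_mg_per_kg); assumes dose additivity."""
    return 1.0 / sum(fraction / ld50 for fraction, ld50 in components)

mixture = [
    (0.70, 4500.0),   # low-toxicity carrier
    (0.25, 900.0),
    (0.05, 150.0),    # minor but comparatively toxic component
]
print(f"estimated mixture LD50 ~ {mixture_ld50(mixture):.0f} mg/kg")
```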


Subjects
Acute Toxicity Tests, Animals, Computer Simulation, Humans
14.
Regul Toxicol Pharmacol ; 117: 104764, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798611

ABSTRACT

Screening certain environmental chemicals for their ability to interact with endocrine targets, including the androgen receptor (AR), is an important global concern. We previously developed a model using a battery of eleven in vitro AR assays to predict in vivo AR activity. Here we describe a revised mathematical modeling approach that also incorporates data from newly available assays and demonstrate that subsets of assays can provide close to the same level of predictivity. These subset models are evaluated against the full model using 1820 chemicals, as well as in vitro and in vivo reference chemicals from the literature. Agonist batteries of as few as six assays and antagonist batteries of as few as five assays can yield balanced accuracies of 95% or better relative to the full model. Balanced accuracy for predicting reference chemicals is 100%. An approach is outlined for researchers to develop their own subset batteries to accurately detect AR activity using assays that map to the pathway of key molecular and cellular events involved in chemical-mediated AR activation and transcriptional activity. This work indicates in vitro bioactivity and in silico predictions that map to the AR pathway could be used in an integrated approach to testing and assessment for identifying chemicals that interact directly with the mammalian AR.
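The subset-versus-full-model comparison above uses balanced accuracy, i.e., the mean of sensitivity and specificity; a small sketch with invented activity calls:

```python
def balanced_accuracy(reference, subset):
    tp = sum(r == 1 and s == 1 for r, s in zip(reference, subset))
    tn = sum(r == 0 and s == 0 for r, s in zip(reference, subset))
    fp = sum(r == 0 and s == 1 for r, s in zip(reference, subset))
    fn = sum(r == 1 and s == 0 for r, s in zip(reference, subset))
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

full_battery_calls = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]    # AR activity per full model
subset_battery_calls = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # same chemicals, reduced battery
print(f"balanced accuracy vs. full model: "
      f"{balanced_accuracy(full_battery_calls, subset_battery_calls):.0%}")
```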


Assuntos
Antagonistas de Receptores de Andrógenos/toxicidade , Androgênios/toxicidade , Substâncias Perigosas/toxicidade , Modelos Teóricos , Receptores Androgênicos , Antagonistas de Receptores de Andrógenos/metabolismo , Androgênios/metabolismo , Animais , Exposição Ambiental/prevenção & controle , Exposição Ambiental/estatística & dados numéricos , Substâncias Perigosas/metabolismo , Ensaios de Triagem em Larga Escala/métodos , Humanos , Receptores Androgênicos/metabolismo
15.
Toxicol In Vitro ; 67: 104916, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32553663

ABSTRACT

Moving toward species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency's httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.


Subjects
Chemical Safety, Risk Assessment, Animal Testing Alternatives, Animals, Factual Databases, High-Throughput Screening Assays, Humans, Toxicity Tests
16.
Nucleic Acids Res ; 48(W1): W586-W590, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32421835

ABSTRACT

High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, representing more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interferent signals largely explained by chemicals interacting with the light spectrum. This phenomenon results in up to 5-10% false-positive results, depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interfering chemicals based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure-activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) Database (∼800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.


Subjects
High-Throughput Screening Assays, Software, Artifacts, Fluorescence, Internet, Machine Learning, Pharmaceutical Preparations/chemistry, Quantitative Structure-Activity Relationship, Workflow
17.
Sci Rep ; 10(1): 3986, 2020 03 04.
Article in English | MEDLINE | ID: mdl-32132587

ABSTRACT

The U.S. federal consortium on toxicology in the 21st century (Tox21) produces quantitative, high-throughput screening (HTS) data on thousands of chemicals across a wide range of assays covering critical biological targets and cellular pathways. Many of these assays, and those used in other in vitro screening programs, rely on luciferase and fluorescence-based readouts that can be susceptible to signal interference by certain chemical structures resulting in false positive outcomes. Included in the Tox21 portfolio are assays specifically designed to measure interference in the form of luciferase inhibition and autofluorescence via multiple wavelengths (red, blue, and green) and under various conditions (cell-free and cell-based, two cell types). Out of 8,305 chemicals tested in the Tox21 interference assays, percent actives ranged from 0.5% (red autofluorescence) to 9.9% (luciferase inhibition). Self-organizing maps and hierarchical clustering were used to relate chemical structural clusters to interference activity profiles. Multiple machine learning algorithms were applied to predict assay interference based on molecular descriptors and chemical properties. The best performing predictive models (accuracies of ~80%) have been included in a web-based tool called InterPred that will allow users to predict the likelihood of assay interference for any new chemical structure and thus increase confidence in HTS data by decreasing false positive testing results.
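The general modeling approach described (molecular descriptors feeding machine learning models to predict interference) can be sketched as below; this is not InterPred itself, and the SMILES, descriptor choices, and interference labels are illustrative placeholders.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

train = [                                # (SMILES, interference label), illustrative
    ("c1ccc2cc3ccccc3cc2c1", 1),         # extended aromatic system, flagged interferent
    ("c1ccc2ccccc2c1", 1),
    ("CCO", 0),
    ("CCCCCC", 0),
    ("CC(=O)OC1=CC=CC=C1C(=O)O", 0),
]

def featurize(smiles):
    m = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(m), Descriptors.MolLogP(m),
            Descriptors.NumAromaticRings(m), Descriptors.TPSA(m)]

X = np.array([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

query = "c1ccc(-c2ccccc2)cc1"            # biphenyl, as an example query structure
print("P(interference):", model.predict_proba([featurize(query)])[0][1])
```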


Subjects
Chemical Databases, High-Throughput Screening Assays, Toxicity Tests, Cluster Analysis, Internet, Quantitative Structure-Activity Relationship
18.
Environ Health Perspect ; 128(2): 27002, 2020 02.
Article in English | MEDLINE | ID: mdl-32074470

ABSTRACT

BACKGROUND: Endocrine disrupting chemicals (EDCs) are xenobiotics that mimic the interaction of natural hormones and alter synthesis, transport, or metabolic pathways. The prospect of EDCs causing adverse health effects in humans and wildlife has led to the development of scientific and regulatory approaches for evaluating bioactivity. This need is being addressed using high-throughput screening (HTS) in vitro approaches and computational modeling. OBJECTIVES: In support of the Endocrine Disruptor Screening Program, the U.S. Environmental Protection Agency (EPA) led two worldwide consortiums to virtually screen chemicals for their potential estrogenic and androgenic activities. Here, we describe the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA) efforts, which follows the steps of the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP). METHODS: The CoMPARA list of screened chemicals built on CERAPP's list of 32,464 chemicals to include additional chemicals of interest, as well as simulated ToxCast™ metabolites, totaling 55,450 chemical structures. Computational toxicology scientists from 25 international groups contributed 91 predictive models for binding, agonist, and antagonist activity predictions. Models were underpinned by a common training set of 1,746 chemicals compiled from a combined data set of 11 ToxCast™/Tox21 HTS in vitro assays. RESULTS: The resulting models were evaluated using curated literature data extracted from different sources. To overcome the limitations of single-model approaches, CoMPARA predictions were combined into consensus models that provided averaged predictive accuracy of approximately 80% for the evaluation set. DISCUSSION: The strengths and limitations of the consensus predictions were discussed with example chemicals; then, the models were implemented into the free and open-source OPERA application to enable screening of new chemicals with a defined applicability domain and accuracy assessment. This implementation was used to screen the entire EPA DSSTox database of ∼875,000 chemicals, and their predicted AR activities have been made available on the EPA CompTox Chemicals dashboard and National Toxicology Program's Integrated Chemical Environment. https://doi.org/10.1289/EHP5580.


Subjects
Computer Simulation, Endocrine Disruptors, Androgens, Factual Databases, High-Throughput Screening Assays, Humans, Androgen Receptors, United States, United States Environmental Protection Agency
19.
Toxicol Appl Pharmacol ; 387: 114774, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31783037

ABSTRACT

Chemical risk assessment relies on toxicity tests that require significant numbers of animals, time and costs. For the >30,000 chemicals in commerce, the current scale of animal testing is insufficient to address chemical safety concerns as regulatory and product stewardship considerations evolve to require more comprehensive understanding of potential biological effects, conditions of use, and associated exposures. We demonstrate the use of a multi-level new approach methodology (NAMs) strategy for hazard- and risk-based prioritization to reduce animal testing. A Level 1/2 chemical prioritization based on estrogen receptor (ER) activity and metabolic activation using ToxCast data was used to select 112 chemicals for testing in a Level 3 human uterine cell estrogen response assay (IKA assay). The Level 3 data were coupled with quantitative in vitro to in vivo extrapolation (Q-IVIVE) to support bioactivity determination (as a surrogate for hazard) in a tissue-specific context. Assay AC50s and Q-IVIVE were used to estimate human equivalent doses (HEDs), and HEDs were compared to rodent uterotrophic assay in vivo-derived points of departure (PODs). For substances active both in vitro and in vivo, IKA assay-derived HEDs were lower or equivalent to in vivo PODs for 19/23 compounds (83%). Activity exposure relationships were calculated, and the IKA assay was as or more protective of human health than the rodent uterotrophic assay for all IKA-positive compounds. This study demonstrates the utility of biologically relevant fit-for-purpose assays and supports the use of a multi-level strategy for chemical risk assessment.


Assuntos
Alternativas ao Uso de Animais/métodos , Disruptores Endócrinos/toxicidade , Ensaios de Triagem em Larga Escala/métodos , Testes de Toxicidade/métodos , Útero/efeitos dos fármacos , Animais , Bioensaio/métodos , Técnicas de Cultura de Células , Linhagem Celular Tumoral , Proliferação de Células/efeitos dos fármacos , Simulação por Computador , Estudos de Viabilidade , Feminino , Humanos , Modelos Biológicos , Ratos , Medição de Risco/métodos , Útero/citologia
20.
Toxicol In Vitro ; 58: 1-12, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30807807

ABSTRACT

Because of their broad biological coverage and increasing affordability, transcriptomic technologies have increased our ability to evaluate cellular response to chemical stressors, providing a potential means of evaluating chemical response while decreasing dependence on apical endpoints derived from traditional long-term animal studies. It has recently been suggested that dose-response modeling of transcriptomic data may be incorporated into risk assessment frameworks as a means of approximating chemical hazard. However, identification of mode of action from transcriptomics lacks a similar systematic framework. To this end, we developed a web-based interactive browser, MoAviz, that allows visualization of perturbed pathways. We populated this browser with expression data from a large public toxicogenomic database (TG-GATEs). We evaluated the extent to which gene expression changes from in-life exposures could be associated with mode of action by developing a novel similarity index, the Modified Jaccard Index (MJI), which provides a quantitative description of genomic pathway similarity (rather than gene-level comparison). While typical compound-compound similarity is low (median MJI = 0.026), clustering of the TG-GATEs compounds identifies groups of similar chemistries. Some clusters aggregated compounds with known similar modes of action, including PPARα agonists (median MJI = 0.315) and NSAIDs (median MJI = 0.322). Analysis of paired in vitro (hepatocyte)-in vivo (liver) experiments revealed systematic patterns in the responses of model systems to chemical stress. Accounting for these model-specific, but chemical-independent, differences improved pathway concordance between in vivo and in vitro models by 36%.
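The similarity measure above builds on the Jaccard index over sets of perturbed pathways, |A ∩ B| / |A ∪ B|; the abstract does not detail the modification, so the sketch below shows only the basic form, with invented pathway sets.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

perturbed = {
    "compound_1": {"PPAR signaling", "fatty acid beta-oxidation", "oxidative stress"},
    "compound_2": {"PPAR signaling", "fatty acid beta-oxidation", "bile acid synthesis"},
    "compound_3": {"NF-kB signaling", "apoptosis"},
}

print(jaccard(perturbed["compound_1"], perturbed["compound_2"]))  # similar mode of action
print(jaccard(perturbed["compound_1"], perturbed["compound_3"]))  # dissimilar
```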


Subjects
Gene Expression Profiling, Animals, Factual Databases, Gene Ontology, Hepatocytes/metabolism, Humans, Risk Assessment, Transcriptome