Results 1 - 20 of 49
1.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.


Subject(s)
Computer Simulation, Pesticides, Quantitative Structure-Activity Relationship, Acute Toxicity Tests, United States Environmental Protection Agency, Animals, Risk Assessment, Pesticides/toxicity, Lethal Dose 50, Rats, Oral Administration, Acute Toxicity Tests/methods, United States, Reproducibility of Results
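
The category logic behind the comparison in this abstract can be sketched in a few lines. The Category III and IV cutoffs are stated in the abstract; the Category I/II boundaries (50 and 500 mg/kg) are the standard USEPA acute oral cutoffs and are assumptions here, as are the function names:

```python
def epa_acute_oral_category(ld50_mg_per_kg: float) -> str:
    """Map a rat acute oral LD50 (mg/kg) to a USEPA toxicity category.
    Category III/IV cutoffs come from the abstract; the I/II cutoffs
    are the standard USEPA boundaries (an assumption here)."""
    if ld50_mg_per_kg <= 50:
        return "I"
    if ld50_mg_per_kg <= 500:
        return "II"
    if ld50_mg_per_kg <= 5000:
        return "III"
    return "IV"

def categorical_concordance(predicted_ld50s, observed_ld50s):
    """Fraction of chemicals whose predicted and observed LD50s fall in
    the same category (the kind of statistic behind the reported 88%)."""
    pairs = list(zip(predicted_ld50s, observed_ld50s))
    same = sum(epa_acute_oral_category(p) == epa_acute_oral_category(o)
               for p, o in pairs)
    return same / len(pairs)
```
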
2.
Nucleic Acids Res ; 48(W1): W586-W590, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32421835

ABSTRACT

High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, which represent more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interference signals, largely explained by chemicals interacting with the light spectrum. This phenomenon results in up to 5-10% false-positive results, depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interfering chemicals, based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure-activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) Database (∼800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.


Subject(s)
High-Throughput Screening Assays, Software, Artifacts, Fluorescence, Internet, Machine Learning, Pharmaceutical Preparations/chemistry, Quantitative Structure-Activity Relationship, Workflow
3.
Toxicol Appl Pharmacol ; 387: 114774, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31783037

ABSTRACT

Chemical risk assessment relies on toxicity tests that require large numbers of animals and substantial time and cost. For the >30,000 chemicals in commerce, the current scale of animal testing is insufficient to address chemical safety concerns as regulatory and product stewardship considerations evolve to require a more comprehensive understanding of potential biological effects, conditions of use, and associated exposures. We demonstrate the use of a multi-level new approach methodologies (NAMs) strategy for hazard- and risk-based prioritization to reduce animal testing. A Level 1/2 chemical prioritization based on estrogen receptor (ER) activity and metabolic activation using ToxCast data was used to select 112 chemicals for testing in a Level 3 human uterine cell estrogen response assay (IKA assay). The Level 3 data were coupled with quantitative in vitro to in vivo extrapolation (Q-IVIVE) to support bioactivity determination (as a surrogate for hazard) in a tissue-specific context. Assay AC50s and Q-IVIVE were used to estimate human equivalent doses (HEDs), and HEDs were compared to in vivo-derived points of departure (PODs) from the rodent uterotrophic assay. For substances active both in vitro and in vivo, IKA assay-derived HEDs were lower than or equivalent to in vivo PODs for 19/23 compounds (83%). Activity exposure relationships were calculated, and the IKA assay was as protective of human health as, or more protective than, the rodent uterotrophic assay for all IKA-positive compounds. This study demonstrates the utility of biologically relevant, fit-for-purpose assays and supports the use of a multi-level strategy for chemical risk assessment.


Subject(s)
Animal Use Alternatives/methods, Endocrine Disruptors/toxicity, High-Throughput Screening Assays/methods, Toxicity Tests/methods, Uterus/drug effects, Animals, Biological Assay/methods, Cell Culture Techniques, Tumor Cell Line, Cell Proliferation/drug effects, Computer Simulation, Feasibility Studies, Female, Humans, Biological Models, Rats, Risk Assessment/methods, Uterus/cytology
4.
Regul Toxicol Pharmacol ; 117: 104764, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798611

ABSTRACT

Screening certain environmental chemicals for their ability to interact with endocrine targets, including the androgen receptor (AR), is an important global concern. We previously developed a model using a battery of eleven in vitro AR assays to predict in vivo AR activity. Here we describe a revised mathematical modeling approach that also incorporates data from newly available assays and demonstrate that subsets of assays can provide close to the same level of predictivity. These subset models are evaluated against the full model using 1820 chemicals, as well as in vitro and in vivo reference chemicals from the literature. Agonist batteries of as few as six assays and antagonist batteries of as few as five assays can yield balanced accuracies of 95% or better relative to the full model. Balanced accuracy for predicting reference chemicals is 100%. An approach is outlined for researchers to develop their own subset batteries to accurately detect AR activity using assays that map to the pathway of key molecular and cellular events involved in chemical-mediated AR activation and transcriptional activity. This work indicates in vitro bioactivity and in silico predictions that map to the AR pathway could be used in an integrated approach to testing and assessment for identifying chemicals that interact directly with the mammalian AR.


Subject(s)
Androgen Receptor Antagonists/toxicity, Androgens/toxicity, Hazardous Substances/toxicity, Theoretical Models, Androgen Receptors, Androgen Receptor Antagonists/metabolism, Androgens/metabolism, Animals, Environmental Exposure/prevention & control, Environmental Exposure/statistics & numerical data, Hazardous Substances/metabolism, High-Throughput Screening Assays/methods, Humans, Androgen Receptors/metabolism
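
Balanced accuracy, the metric used above to compare the subset batteries against the full model, is the mean of sensitivity and specificity. A generic sketch of the metric (not the authors' code), assuming binary active/inactive calls encoded as 1/0:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy = (sensitivity + specificity) / 2 for binary
    calls (1 = AR-active, 0 = inactive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return (sensitivity + specificity) / 2
```
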
5.
Anal Bioanal Chem ; 411(4): 853-866, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30519961

ABSTRACT

In August 2015, the US Environmental Protection Agency (EPA) convened a workshop entitled "Advancing non-targeted analyses of xenobiotic chemicals in environmental and biological media." The purpose of the workshop was to bring together the foremost experts in non-targeted analysis (NTA) to discuss the state-of-the-science for generating, interpreting, and exchanging NTA measurement data. During the workshop, participants discussed potential designs for a collaborative project that would use EPA resources, including the ToxCast library of chemical substances, the DSSTox database, and the CompTox Chemicals Dashboard, to evaluate cutting-edge NTA methods. That discussion was the genesis of EPA's Non-Targeted Analysis Collaborative Trial (ENTACT). Nearly 30 laboratories have enrolled in ENTACT and used a variety of chromatography, mass spectrometry, and data processing approaches to characterize ten synthetic chemical mixtures, three standardized media (human serum, house dust, and silicone band) extracts, and thousands of individual substances. Initial results show that nearly all participants have detected and reported more compounds in the mixtures than were intentionally added, with large inter-laboratory variability in the number of reported compounds. A comparison of gas and liquid chromatography results shows that the largest share (45.3%) of correctly identified compounds was detected by only one method, and 15.4% of compounds were not identified. Finally, a limited set of true positive identifications indicates substantial differences in observable chemical space when employing disparate separation and ionization techniques as part of NTA workflows. This article describes the genesis of ENTACT, all study methods and materials, and an analysis of results submitted to date.


Subject(s)
Cooperative Behavior, Environmental Pollutants/analysis, Research Design, Xenobiotics/analysis, Chromatography/methods, Complex Mixtures, Data Collection, Dust, Education, Environmental Exposure, Environmental Pollutants/standards, Environmental Pollutants/toxicity, Humans, Laboratories/organization & administration, Mass Spectrometry/methods, Quality Control, Reference Standards, Serum, Silicones/chemistry, United States, United States Environmental Protection Agency, Xenobiotics/standards, Xenobiotics/toxicity
6.
Arch Toxicol ; 92(2): 587-600, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29075892

ABSTRACT

To address a major challenge in chemical safety assessment, the need for alternative approaches for characterizing systemic effect levels, a predictive model was developed. Systemic effect levels were curated from ToxRefDB, HESS-DB, and COSMOS-DB across numerous study types, totaling 4379 in vivo studies for 1247 chemicals. Observed systemic effects in mammalian models are a complex function of chemical dynamics, kinetics, and inter- and intra-individual variability. To address this complex problem, systemic effect levels were modeled at the study level by leveraging study covariates (e.g., study type, strain, administration route) in addition to multiple descriptor sets, including chemical (ToxPrint, PaDEL, and Physchem), biological (ToxCast), and kinetic descriptors. Using random forest modeling with cross-validation and external validation procedures, study-level covariates alone accounted for approximately 15% of the variance, reducing the root mean squared error (RMSE) from 0.96 log10 to 0.85 log10 mg/kg/day and providing a baseline performance metric (lower expectation of model performance). A consensus model developed using a combination of study-level covariates and chemical, biological, and kinetic descriptors explained a total of 43% of the variance with an RMSE of 0.69 log10 mg/kg/day. A benchmark model (upper expectation of model performance) was also developed, with an RMSE of 0.5 log10 mg/kg/day, by incorporating study-level covariates and the mean effect level per chemical. To achieve a representative chemical-level prediction, the minimum predicted and observed study-level effect levels per chemical were compared, reducing the RMSE from 1.0 to 0.73 log10 mg/kg/day, equivalent to 87% of predictions falling within an order of magnitude of the observed value. Although biological descriptors did not improve model performance, the final model was enriched for biological descriptors indicating xenobiotic metabolism gene expression, oxidative stress, and cytotoxicity, demonstrating the importance of accounting for kinetics and non-specific bioactivity in predicting systemic effect levels. Herein, we generated an externally predictive model of systemic effect levels for use as a safety assessment tool and have generated forward predictions for over 30,000 chemicals.


Subject(s)
Chemical Models, Toxicity Tests, Animals, Cosmetics/toxicity, Chemical Databases, Statistical Models, Toxicokinetics
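
The performance metrics reported in this abstract, RMSE on log10-transformed effect levels and the share of predictions within an order of magnitude, can be sketched generically; the function names are illustrative and this is not the authors' code:

```python
import math

def rmse_log10(observed, predicted):
    """RMSE of log10-transformed effect levels (mg/kg/day)."""
    sq = [(math.log10(o) - math.log10(p)) ** 2
          for o, p in zip(observed, predicted)]
    return math.sqrt(sum(sq) / len(sq))

def fraction_within_order_of_magnitude(observed, predicted):
    """Share of predictions within 10x of the observed value
    (the abstract reports 87% at the chemical level)."""
    hits = sum(abs(math.log10(o) - math.log10(p)) <= 1.0
               for o, p in zip(observed, predicted))
    return hits / len(observed)
```
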
7.
J Chem Inf Model ; 57(11): 2874-2884, 2017 11 27.
Article in English | MEDLINE | ID: mdl-29022712

ABSTRACT

We present a practical and easy-to-run in silico workflow that uses a structure-based docking strategy to derive highly predictive classification models of the androgenic potential of chemicals. Models were trained on a high-quality chemical collection comprising 1689 curated compounds made available within the CoMPARA consortium by the US Environmental Protection Agency and were integrated with a two-step applicability domain whose implementation improved both the confidence in prediction and the statistics by reducing the number of false negatives. Among the nine X-ray-solved androgen receptor structures, the crystal 2PNU (entry code from the Protein Data Bank) was associated with the best-performing structure-based classification model. Three validation sets, each comprising 2590 compounds extracted from the DUD-E collection, were used to challenge model performance and the effectiveness of the applicability domain implementation. Next, the 2PNU model was applied to screen and prioritize two collections of chemicals. The first is a small pool of 12 representative androgenic compounds that were accurately classified with a clear rationale at the molecular level. The second is a large external blind set of 55,450 chemicals with potential for human exposure. We show how the use of molecular docking provides highly interpretable models and can represent a real-life option as an alternative non-testing method for predictive toxicology.


Subject(s)
Androgens/toxicity, Molecular Docking Simulation, Androgens/chemistry, Androgens/metabolism, Computer Simulation, Protein Conformation, Quantitative Structure-Activity Relationship, Androgen Receptors/chemistry, Androgen Receptors/metabolism
8.
J Chem Inf Model ; 57(1): 36-49, 2017 01 23.
Article in English | MEDLINE | ID: mdl-28006899

ABSTRACT

Little toxicity data are available for the vast majority of chemicals in commerce. High-throughput screening (HTS) studies, such as those being carried out by the U.S. Environmental Protection Agency (EPA) ToxCast program in partnership with the federal Tox21 research program, can generate biological data to inform models for predicting potential toxicity. However, physicochemical properties are also needed to model environmental fate and transport, as well as exposure potential. The purpose of the present study was to generate an open-source quantitative structure-property relationship (QSPR) workflow to predict a variety of physicochemical properties that would have cross-platform compatibility to integrate into existing cheminformatics workflows. In this effort, decades-old experimental property data sets available within the EPA EPI Suite were reanalyzed using modern cheminformatics workflows to develop updated QSPR models capable of supplying computationally efficient, open, and transparent HTS property predictions in support of environmental modeling efforts. Models were built using updated EPI Suite data sets for the prediction of six physicochemical properties: octanol-water partition coefficient (logP), water solubility (logS), boiling point (BP), melting point (MP), vapor pressure (logVP), and bioconcentration factor (logBCF). The coefficient of determination (R2) between the estimated values and experimental data for the six predicted properties ranged from 0.826 (MP) to 0.965 (BP), with model performance for five of the six properties exceeding that of the original EPI Suite models. The newly derived models can be employed for rapid estimation of physicochemical properties within an open-source HTS workflow to inform fate and toxicity prediction models of environmental chemicals.


Subject(s)
Chemical Phenomena, Computer Simulation, Environmental Pollutants/chemistry, Machine Learning, Environmental Pollutants/toxicity, Informatics, Quantitative Structure-Activity Relationship, Solubility, Transition Temperature, Vapor Pressure, Water/chemistry
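
The coefficient of determination used to score the six property models follows the standard definition; a generic sketch (not the EPI Suite reanalysis code):

```python
def r_squared(experimental, estimated):
    """Coefficient of determination (R^2) between experimental and
    estimated property values: 1 - SS_res / SS_tot."""
    n = len(experimental)
    mean = sum(experimental) / n
    ss_res = sum((y - f) ** 2 for y, f in zip(experimental, estimated))
    ss_tot = sum((y - mean) ** 2 for y in experimental)
    return 1 - ss_res / ss_tot
```
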
9.
Chem Res Toxicol ; 29(9): 1410-27, 2016 09 19.
Article in English | MEDLINE | ID: mdl-27509301

ABSTRACT

The US Environmental Protection Agency's (EPA) Endocrine Disruptor Screening Program (EDSP) is using in vitro data generated from ToxCast/Tox21 high-throughput screening assays to assess the endocrine activity of environmental chemicals. Considering that in vitro assays may have limited metabolic capacity, inactive chemicals that are biotransformed into metabolites with endocrine bioactivity may be missed in further screening and testing. Therefore, there is value in developing novel approaches to account for metabolism and the endocrine activity of both parent chemicals and their associated metabolites. We used commercially available software to predict metabolites of 50 parent compounds, of which 38 are known to have estrogenic metabolites and 12, along with their metabolites, are negative for estrogenic activity. Three ER QSAR models were used to determine potential estrogenic bioactivity of the parent compounds and predicted metabolites; the outputs of the models were averaged, and the chemicals were then ranked based on the total estrogenicity of the parent chemical and metabolites. The metabolite prediction software correctly identified known estrogenic metabolites for 26 out of 27 parent chemicals with associated metabolite data, and 39 out of 46 estrogenic metabolites were predicted as potential biotransformation products derived from the parent chemical. The QSAR models estimated stronger estrogenic activity for the majority of the known estrogenic metabolites compared to their parent chemicals. Finally, the three models identified a similar set of parent compounds as top-ranked chemicals based on the estrogenicity of putative metabolites. This proposed in silico approach is an inexpensive and rapid strategy for the detection of chemicals with estrogenic metabolites and may reduce potential false negative results from in vitro assays.


Subject(s)
Computer Simulation, Endocrine Disruptors/toxicity, Environmental Pollutants/toxicity, Estrogens/chemistry, Databases as Topic, Endocrine Disruptors/chemistry, Endocrine Disruptors/metabolism, Environmental Pollutants/metabolism, Forecasting, Humans, Quantitative Structure-Activity Relationship, United States, United States Environmental Protection Agency
10.
Chem Res Toxicol ; 29(8): 1225-51, 2016 08 15.
Article in English | MEDLINE | ID: mdl-27367298

ABSTRACT

The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as those used in EPA's Endocrine Disruptor Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA. Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.


Subject(s)
Toxicology
11.
Chem Res Toxicol ; 28(4): 738-51, 2015 Apr 20.
Article in English | MEDLINE | ID: mdl-25697799

ABSTRACT

The U.S. Tox21 and EPA ToxCast programs screen thousands of environmental chemicals for bioactivity using hundreds of high-throughput in vitro assays to build predictive models of toxicity. We represented chemicals based on bioactivity and chemical structure descriptors, then used supervised machine learning to predict in vivo hepatotoxic effects. A set of 677 chemicals was represented by 711 in vitro bioactivity descriptors (from ToxCast assays), 4,376 chemical structure descriptors (from QikProp, OpenBabel, PaDEL, and PubChem), and three hepatotoxicity categories (from animal studies). Hepatotoxicants were defined by rat liver histopathology observed after chronic chemical testing and grouped into hypertrophy (161), injury (101), and proliferative lesions (99). Classifiers were built using six machine learning algorithms: linear discriminant analysis (LDA), Naïve Bayes (NB), support vector machines (SVM), classification and regression trees (CART), k-nearest neighbors (KNN), and an ensemble of these classifiers (ENSMB). Classifiers of hepatotoxicity were built using chemical structure descriptors, ToxCast bioactivity descriptors, and hybrid descriptors. Predictive performance was evaluated using 10-fold cross-validation testing and in-loop, filter-based, feature subset selection. Hybrid classifiers had the best balanced accuracy for predicting hypertrophy (0.84 ± 0.08), injury (0.80 ± 0.09), and proliferative lesions (0.80 ± 0.10). Though chemical and bioactivity classifiers had similar balanced accuracy, the former were more sensitive and the latter more specific. CART, ENSMB, and SVM classifiers performed best, and nuclear receptor activation and mitochondrial functions were frequently found in highly predictive classifiers of hepatotoxicity. ToxCast and ToxRefDB provide the largest and richest publicly available data sets for mining linkages between the in vitro bioactivity of environmental chemicals and their adverse histopathological outcomes. Our findings demonstrate the utility of high-throughput assays for characterizing rodent hepatotoxicants, the benefit of using hybrid representations that integrate bioactivity and chemical structure, and the need for objective evaluation of classification performance.


Subject(s)
Liver/drug effects, Toxicity Tests, Animals, In Vitro Techniques, Molecular Structure, Rats
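
The ENSMB ensemble named in this abstract can be illustrated as a simple majority vote over the individual classifiers' binary calls. This is a minimal sketch, not the authors' implementation, and it assumes an odd number of voters so ties cannot occur:

```python
from collections import Counter

def ensemble_vote(predictions_per_model):
    """Combine binary hepatotoxicity calls (1/0) from several
    classifiers (e.g., LDA, NB, SVM, CART, KNN) by majority vote.
    `predictions_per_model` is a list of per-model prediction lists,
    one entry per chemical."""
    n_chemicals = len(predictions_per_model[0])
    consensus = []
    for i in range(n_chemicals):
        votes = Counter(model[i] for model in predictions_per_model)
        consensus.append(votes.most_common(1)[0][0])
    return consensus
```
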
12.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and the associated experimental data. This is especially true when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein, we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read, and the resulting in-memory representation is cross-referenced with any existing identifiers for consistency. Finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects and ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. Both QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes the accuracy and reliability of molecular descriptors. The freely available resources in KNIME, GitHub, and Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
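
A toy illustration of two of the standardization steps named above (desalting and duplicate removal) on raw SMILES strings. The published workflow runs in KNIME with a full chemistry toolkit, so this pure-Python sketch is only a crude approximation; in particular, it keeps the longest fragment by string length rather than the heaviest fragment, and it omits tautomer/nitro standardization, valence correction, and neutralization:

```python
def desalt(smiles: str) -> str:
    """Keep the largest fragment of a dot-disconnected SMILES.
    Crude stand-in: longest string, not heaviest fragment."""
    return max(smiles.split("."), key=len)

def qsar_ready(smiles_list):
    """Toy pipeline: desalt each structure, then drop duplicates
    while preserving input order."""
    seen, out = set(), []
    for smi in smiles_list:
        parent = desalt(smi)
        if parent not in seen:
            seen.add(parent)
            out.append(parent)
    return out
```
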

13.
Vaccine X ; 19: 100503, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38868522

ABSTRACT

Scorpion envenoming (SE) is a public health problem in developing countries. In Algeria, the population exposed to the risk of SE was estimated at 86.45% in 2019. Thus, the development of a vaccine to protect the exposed population against scorpion toxins would be a major advance in the fight against this disease. This work aimed to evaluate the immunoprotective effect of a Multiple Antigenic Peptide against the Aah II toxin of the Androctonus australis hector scorpion, the most dangerous scorpion species in Algeria. The immunogen MAP1Aah2 was designed and tested accordingly. This molecule contains a B epitope derived from the Aah II toxin, linked by a spacer to a universal T epitope derived from the tetanus toxin. The results showed that MAP1Aah2 was non-toxic even though its sequence was derived from the Aah II toxin. The immunoenzymatic assay revealed that the three immunization regimens tested generated specific anti-MAP1Aah2 antibodies that cross-reacted with the toxin. Mice immunized with this immunogen were partially protected against mortality caused by challenge doses of 2 and 3 LD50 of the toxin. The survival rate and the symptoms that developed varied depending on the adjuvant and the challenge dose used. In the in vitro neutralization test, the immune sera of mice that received the immunogen with incomplete Freund's adjuvant neutralized a challenge dose of 2 LD50. Hence, the concept of using peptide dendrimers based on linear epitopes of scorpion toxins as immunogens against the parent toxin was established. However, the protective properties of the tested immunogen require further optimization.

14.
J Chem Inf Model ; 53(4): 867-78, 2013 Apr 22.
Article in English | MEDLINE | ID: mdl-23469921

ABSTRACT

The European REACH regulation requires information on ready biodegradation, a screening test to assess the biodegradability of chemicals. At the same time, REACH encourages the use of alternatives to animal testing, which include predictions from quantitative structure-activity relationship (QSAR) models. The aim of this study was to build QSAR models to predict ready biodegradation of chemicals using different modeling methods and types of molecular descriptors. Particular attention was given to data screening and validation procedures in order to build predictive models. Experimental values for 1055 chemicals were collected from the webpage of the National Institute of Technology and Evaluation of Japan (NITE): 837 and 218 molecules were used for calibration and testing purposes, respectively. In addition, models were further evaluated using an external validation set consisting of 670 molecules. Classification models were produced to discriminate biodegradable from nonbiodegradable chemicals by means of different mathematical methods: k-nearest neighbors, partial least squares discriminant analysis, and support vector machines, as well as their consensus models. The proposed models and the derived consensus analysis demonstrated good classification performance with respect to already published QSAR models of biodegradation. Relationships between the molecular descriptors selected in each QSAR model and biodegradability were evaluated.


Subject(s)
Statistical Models, Small Molecule Libraries/metabolism, Environmental Biodegradation, Chemical Databases, Molecular Structure, Quantitative Structure-Activity Relationship, Small Molecule Libraries/chemistry, Small Molecule Libraries/classification
15.
Molecules ; 17(5): 4791-810, 2012 Apr 25.
Article in English | MEDLINE | ID: mdl-22534664

ABSTRACT

One of the OECD principles for model validation requires defining the Applicability Domain (AD) of a QSAR model. This is important since reliable predictions are generally limited to query chemicals structurally similar to the training compounds used to build the model. Characterizing the interpolation space is therefore central to defining the AD; in this study, several existing descriptor-based approaches to this task are discussed and compared by applying them to validated datasets from the literature. The algorithms adopted by the different approaches define the interpolation space in several ways, while the chosen thresholds contribute significantly to the extrapolations. For each dataset and approach implemented in this study, the comparison was carried out by considering the model statistics and the relative position of the test set with respect to the training space.


Subject(s)
Statistical Models, Quantitative Structure-Activity Relationship, Algorithms, Chemical Models
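
One common descriptor-based way to define the interpolation space, distance to the training-set centroid with a mean-plus-k-standard-deviations threshold, can be sketched as follows. The multiplier k is an illustrative assumption; the approaches compared in the paper differ in exactly these algorithmic and threshold choices:

```python
import math

def _centroid(rows):
    """Mean of each descriptor column across the training set."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def in_domain(train_descriptors, query, k=1.5):
    """True if the query chemical's Euclidean distance to the training
    centroid is within (mean + k * std) of the training distances.
    k = 1.5 is an illustrative threshold, not a recommended value."""
    c = _centroid(train_descriptors)
    dist = lambda x: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
    d_train = [dist(x) for x in train_descriptors]
    mean = sum(d_train) / len(d_train)
    std = math.sqrt(sum((d - mean) ** 2 for d in d_train) / len(d_train))
    return dist(query) <= mean + k * std
```
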
16.
Front Pharmacol ; 13: 864742, 2022.
Article in English | MEDLINE | ID: mdl-35496281

ABSTRACT

Regulatory toxicology testing has traditionally relied on in vivo methods to inform decision-making. However, scientific, practical, and ethical considerations have led to an increased interest in the use of in vitro and in silico methods to fill data gaps. While in vitro experiments have the advantage of rapid application across large chemical sets, interpretation of data coming from these non-animal methods can be challenging due to the mechanistic nature of many assays. In vitro to in vivo extrapolation (IVIVE) has emerged as a computational tool to help facilitate this task. Specifically, IVIVE uses physiologically based pharmacokinetic (PBPK) models to estimate tissue-level chemical concentrations based on various dosing parameters. This approach is used to estimate the administered dose needed to achieve in vitro bioactivity concentrations within the body. IVIVE results can be useful to inform on metrics such as margin of exposure or to prioritize potential chemicals of concern, but the PBPK models used in this approach have extensive data requirements. Thus, access to input parameters, as well as the technical requirements of applying and interpreting models, has limited the use of IVIVE as a routine part of in vitro testing. As interest in using non-animal methods for regulatory and research contexts continues to grow, our perspective is that access to computational support tools for PBPK modeling and IVIVE will be essential for facilitating broader application and acceptance of these techniques, as well as for encouraging the most scientifically sound interpretation of in vitro results. We highlight recent developments in two open-access computational support tools for PBPK modeling and IVIVE accessible via the Integrated Chemical Environment (https://ice.ntp.niehs.nih.gov/), demonstrate the types of insights these tools can provide, and discuss how these analyses may inform in vitro-based decision making.
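
In its simplest form, the reverse dosimetry step of IVIVE scales an in vitro bioactive concentration by the steady-state plasma concentration per unit dose predicted by a PBPK model. A minimal sketch assuming linear kinetics; the parameter names are illustrative and are not taken from the ICE tools' API:

```python
def oral_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Administered dose (mg/kg/day) estimated to reach the in vitro
    bioactive concentration in plasma: AC50 divided by the steady-state
    concentration produced by a 1 mg/kg/day dose (linear-kinetics
    assumption)."""
    return ac50_uM / css_uM_per_mg_kg_day

def margin_of_exposure(equivalent_dose_mg_kg_day, exposure_mg_kg_day):
    """Ratio of the bioactivity-equivalent dose to an exposure
    estimate; larger values indicate a wider margin."""
    return equivalent_dose_mg_kg_day / exposure_mg_kg_day
```
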

17.
Toxicol Sci ; 188(1): 34-47, 2022 06 28.
Article in English | MEDLINE | ID: mdl-35426934

ABSTRACT

Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability of the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize variability and reproducibility of results across a set of up to 2441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization, on average, only 60% of the time. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat, feed used, treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as sources of the variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we were able to quantify a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.


Subject(s)
Reproducibility of Results , Animals , Databases, Factual , Probability , Rats , Risk Assessment/methods , Toxicity Tests, Acute/methods
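The two headline numbers in the abstract above (categorical concordance between replicate studies, and a log10 margin of uncertainty) can be illustrated with a toy calculation on made-up replicate LD50 values; the category bounds follow the EPA-style acute oral cutoffs (50, 500, 5000 mg/kg) quoted elsewhere on this page, but the replicate data are invented:

```python
# Toy concordance / log10-margin calculation on hypothetical replicate LD50s.
import math
from itertools import combinations

def category(ld50_mg_per_kg):
    """EPA-style acute oral toxicity category (I-IV) from an LD50 value."""
    for cat, upper in ((1, 50), (2, 500), (3, 5000)):
        if ld50_mg_per_kg <= upper:
            return cat
    return 4

replicates = {  # chemical -> replicate LD50 values (mg/kg), invented
    "A": [300, 450, 700],
    "B": [5200, 6100],
    "C": [40, 55],
}

# Fraction of replicate pairs that land in the same hazard category.
pairs = [(x, y) for vals in replicates.values()
         for x, y in combinations(vals, 2)]
concordance = sum(category(x) == category(y) for x, y in pairs) / len(pairs)

# Spread of replicates around each chemical's mean, on the log10 scale
# (a root-mean-square residual, analogous to a margin of uncertainty).
log_residuals = [math.log10(v) - sum(map(math.log10, vals)) / len(vals)
                 for vals in replicates.values() for v in vals]
margin = math.sqrt(sum(r * r for r in log_residuals) / len(log_residuals))
```

With these invented replicates, only 2 of 5 pairs agree categorically, showing how values near a category boundary (e.g., 450 vs. 700 mg/kg) drive discordance even when the potencies are close.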
18.
Birth Defects Res ; 114(16): 1037-1055, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35532929

ABSTRACT

BACKGROUND: The developmental toxicity potential (dTP) concentration from the devTOX quickPredict (devTOXqP) assay, a metabolomics-based human induced pluripotent stem cell assay, predicts a chemical's developmental toxicity potency. Here, in vitro to in vivo extrapolation (IVIVE) approaches were applied to address whether the devTOXqP assay could quantitatively predict in vivo developmental toxicity lowest effect levels (LELs) for the prototypical teratogen valproic acid (VPA) and a group of structural analogues. METHODS: VPA and a series of structural analogues were tested with the devTOXqP assay to determine dTP concentrations, and we estimated the equivalent administered doses (EADs) that would lead to plasma concentrations equivalent to the in vitro dTP concentrations. The EADs were compared to the LELs in rat developmental toxicity studies, human clinical doses, and EADs reported using other in vitro assays. To evaluate the impact of different pharmacokinetic (PK) models on IVIVE outcomes, we compared EADs predicted using various open-source and commercially available PK and physiologically based PK (PBPK) models. To evaluate the effect of in vitro kinetics, an equilibrium distribution model was applied to translate dTP concentrations to free medium concentrations before subsequent IVIVE analyses. RESULTS: The EAD estimates for the VPA analogues based on different PK/PBPK models were quantitatively similar to in vivo data from both rats and humans, where available, and the derived rank order of the chemicals was consistent with observed in vivo developmental toxicity. Different models were identified that provided accurate predictions for rat prenatal LELs and conservative estimates of safe human exposure. The impact of in vitro kinetics on EAD estimates is chemical-dependent. EADs from this study were within the range of doses predicted from other in vitro and model organism data.
CONCLUSIONS: This study highlights the importance of pharmacokinetic considerations when using in vitro assays and demonstrates the utility of the devTOXqP human stem cell-based platform to quantitatively assess a chemical's developmental toxicity potency.


Subject(s)
Induced Pluripotent Stem Cells , Valproic Acid , Animals , Female , Humans , Pregnancy , Rats , Teratogens/toxicity , Valproic Acid/toxicity
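The kinetic adjustment described in the METHODS above (an equilibrium distribution model converting nominal dTP concentrations to free medium concentrations before IVIVE) can be sketched as follows; the compound names, unbound fractions, and Css-per-dose values are hypothetical placeholders, not devTOXqP data:

```python
# Hypothetical nominal -> free -> EAD translation for a developmental
# toxicity IVIVE workflow. All numbers below are invented for illustration.

def free_medium_conc(nominal_uM, fu_medium):
    """Equilibrium-distribution simplification: only the unbound fraction
    of the nominal in vitro concentration is assumed bioavailable."""
    return nominal_uM * fu_medium

def ead_mg_per_kg(dtp_uM, fu_medium, css_uM_per_mg_kg):
    """Equivalent administered dose: the dose whose predicted steady-state
    plasma concentration matches the free in vitro bioactive concentration."""
    return free_medium_conc(dtp_uM, fu_medium) / css_uM_per_mg_kg

# Rank-order hypothetical analogues by potency (lower EAD = more potent).
# Tuples are (dTP concentration uM, fraction unbound, Css per unit dose).
analogues = {"VPA": (120.0, 0.3, 0.9), "analogue-1": (40.0, 0.4, 0.8)}
eads = {name: ead_mg_per_kg(*params) for name, params in analogues.items()}
ranked = sorted(eads, key=eads.get)  # most potent first
```

The design point this illustrates: because fu_medium enters multiplicatively, the in vitro kinetic correction shifts each chemical's EAD by a chemical-specific factor, which is why the abstract reports a chemical-dependent impact.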
19.
Front Pharmacol ; 13: 980747, 2022.
Article in English | MEDLINE | ID: mdl-36278238

ABSTRACT

Current computational technologies hold promise for prioritizing the testing of the thousands of chemicals in commerce. Here, a case study is presented demonstrating comparative risk-prioritization approaches based on the ratio of surrogate hazard and exposure data, called the margin of exposure (MoE). Exposures were estimated using results from the U.S. EPA's ExpoCast predictive model (SEEM3), and bioactivity was estimated using: 1) oral equivalent doses (OEDs) derived from the U.S. EPA's ToxCast high-throughput screening program together with in vitro to in vivo extrapolation, and 2) thresholds of toxicological concern (TTCs) determined using a structure-based decision tree implemented in the open-source Toxtree software. To ground-truth these computational approaches, we compared the MoEs based on predicted noncancer TTC and OED values to those derived using the traditional method of deriving points of departure from no-observed-adverse-effect levels (NOAELs) from in vivo oral exposures in rodents. TTC-based MoEs were lower than NOAEL-based MoEs for 520 of 522 (99.6%) compounds in this smaller overlapping dataset, but were relatively well correlated with them (r2 = 0.59). TTC-based MoEs were also lower than OED-based MoEs for 590 (83.2%) of the 709 evaluated chemicals, indicating that TTCs may serve as a conservative surrogate in the absence of chemical-specific experimental data. The TTC-based MoE prioritization process was then applied to over 45,000 curated environmental chemical structures as a proof of concept for high-throughput prioritization using TTC-based MoEs. This study demonstrates the utility of exploiting existing computational methods at the pre-assessment phase of a tiered risk-based approach to quickly, and conservatively, prioritize thousands of untested chemicals for further study.
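The MoE ratio underlying this prioritization scheme can be sketched as below; the chemical names, TTC, NOAEL-based POD, and exposure values are invented for illustration and are not SEEM3, ToxCast, or Toxtree outputs:

```python
# Toy margin-of-exposure (MoE) prioritization: MoE = hazard surrogate /
# predicted exposure; a smaller MoE means higher priority for follow-up.
# All values are hypothetical.

def margin_of_exposure(pod_mg_per_kg_day, exposure_mg_per_kg_day):
    """Ratio of a point of departure (TTC, OED, or NOAEL) to an
    exposure estimate, both in mg/kg/day."""
    return pod_mg_per_kg_day / exposure_mg_per_kg_day

# name -> (TTC, NOAEL-based POD, predicted exposure), all mg/kg/day
chemicals = {
    "chem-A": (0.0015, 1.2, 1e-4),
    "chem-B": (0.03, 5.0, 1e-6),
}

ttc_moes = {c: margin_of_exposure(ttc, exp)
            for c, (ttc, _, exp) in chemicals.items()}
noael_moes = {c: margin_of_exposure(noael, exp)
              for c, (_, noael, exp) in chemicals.items()}

# TTC-based MoEs are typically the lower (more conservative) of the two,
# mirroring the 99.6% finding reported in the abstract.
priority = sorted(ttc_moes, key=ttc_moes.get)  # most urgent first
```

Note that prioritization order can differ between the two hazard surrogates: here chem-A ranks first on either basis, but a chemical with a very low TTC and a high NOAEL could swap positions depending on which surrogate is used.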

20.
ALTEX ; 38(2): 327-335, 2021.
Article in English | MEDLINE | ID: mdl-33511999

ABSTRACT

Efforts are underway to develop and implement nonanimal approaches that can characterize acute systemic lethality. A workshop was held in October 2019 to discuss developments in the prediction of acute oral lethality for chemicals and mixtures, as well as progress and needs in the understanding and modeling of mechanisms of acute lethality. During the workshop, each speaker led the group through a series of charge questions to identify clear next steps toward the aims of the workshop. Participants concluded that a variety of approaches will be needed and should be applied in a tiered fashion. Non-testing approaches, including waiving tests, computational models for single chemicals, and calculating the acute lethality of mixtures based on the LD50 values of mixture components, could be used for some assessments now, especially in the very toxic or non-toxic classification ranges. Agencies can develop policies indicating contexts in which mathematical approaches for mixtures assessment are acceptable; to expand applicability, poorly predicted mixtures should be examined to understand discrepancies and adapt the approach. Transparency and an understanding of the variability of in vivo approaches are crucial to facilitate regulatory application of new approaches. In a replacement strategy, mechanistically based in vitro or in silico models will be needed to support non-testing approaches, especially for highly acutely toxic chemicals. The workshop discussed approaches that can be used in the immediate or near term for some applications and identified the remaining actions needed to implement approaches to fully replace the use of animals for acute systemic toxicity testing.


Subject(s)
Toxicity Tests, Acute , Animals , Computer Simulation , Humans
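The component-based calculation mentioned in the abstract above (deriving a mixture's acute lethality from the LD50 values of its components) is commonly done with a GHS-style additivity formula, 100 / ATEmix = sum(Ci / ATEi), where Ci is the weight percent of component i. The two-component formulation below is hypothetical:

```python
# GHS-style additivity estimate of a mixture's acute toxicity estimate
# (ATE) from its components: 100 / ATEmix = sum(Ci / ATEi).
# Component data are hypothetical.

def mixture_ate(components):
    """components: iterable of (weight_percent, ATE_mg_per_kg) pairs for
    all toxicologically relevant ingredients (assumed to sum to ~100%).
    Returns the estimated mixture ATE in mg/kg."""
    return 100.0 / sum(pct / ate for pct, ate in components)

# Hypothetical formulation: 10% of a toxic active ingredient
# (ATE 50 mg/kg) plus 90% low-toxicity carrier (ATE 5000 mg/kg).
ate_mix = mixture_ate([(10.0, 50.0), (90.0, 5000.0)])
```

Because the reciprocals are summed, the toxic 10% component dominates: the mixture estimate comes out near 459 mg/kg, far below the carrier's 5000 mg/kg, which is why the workshop flagged the very-toxic classification range as a place where such non-testing approaches are most decision-relevant.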