Results 1 - 20 of 131
1.
Chem Res Toxicol ; 37(5): 685-697, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38598715

ABSTRACT

Xenobiotic metabolism is a key consideration in evaluating the hazards and risks posed by environmental chemicals. A number of software tools exist that are capable of simulating metabolites, but each reports its predictions in a different format and with varying levels of detail. This makes comparing the performance and coverage of the tools a practical challenge. To address this shortcoming, we developed a metabolic simulation framework called MetSim, which comprises three main components. A graph-based schema was developed to allow metabolism information to be harmonized. The schema was implemented in MongoDB to store and retrieve metabolic graphs for subsequent analysis. MetSim currently includes an application programming interface for four metabolic simulators: BioTransformer, the OECD Toolbox, EPA's chemical transformation simulator (CTS), and tissue metabolism simulator (TIMES). Lastly, MetSim provides functions to help evaluate simulator performance for specific data sets. In this study, a set of 112 drugs with 432 reported metabolites was compiled, and predictions were made using the four simulators. Fifty-nine of the 112 drugs were taken from the Small Molecule Pathway Database, with the remainder sourced from the literature. The human models within BioTransformer and CTS (Phase I only) and the rat models within TIMES and the OECD Toolbox (Phase I only) were used to make predictions for the chemicals in the data set. The (recall, precision) values for each individual tool, ranked by decreasing recall, were CTS (0.54, 0.017), BioTransformer (0.50, 0.008), Toolbox in vitro (0.40, 0.144), TIMES in vivo (0.40, 0.133), Toolbox in vivo (0.40, 0.118), and TIMES in vitro (0.39, 0.128). Combining all of the model predictions increased the overall recall (0.73, 0.008). MetSim makes it more efficient to derive insights into the performance and coverage of in silico metabolic simulators, which in turn should aid future efforts to evaluate other data sets.
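
As an illustration of the kind of performance evaluation described above (a minimal sketch, not MetSim's actual API), recall and precision can be computed by comparing predicted and reported metabolites as sets of structure identifiers:

# Minimal sketch (not MetSim's actual functions): score one simulator's metabolite
# predictions against reported metabolites, both given as sets of identifiers.
def score_simulator(predicted_by_parent, reported_by_parent):
    """predicted_by_parent / reported_by_parent: dict mapping parent chemical
    ID -> set of metabolite identifiers (e.g., InChIKeys)."""
    true_pos = false_pos = false_neg = 0
    for parent, reported in reported_by_parent.items():
        predicted = predicted_by_parent.get(parent, set())
        true_pos += len(predicted & reported)
        false_pos += len(predicted - reported)
        false_neg += len(reported - predicted)
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    return recall, precision

# Hypothetical usage with placeholder identifiers:
reported = {"drug_A": {"MET1", "MET2"}}
predicted = {"drug_A": {"MET1", "MET3", "MET4"}}
print(score_simulator(predicted, reported))  # (0.5, 0.333...)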


Subject(s)
Computer Simulation, Software, Xenobiotics, Xenobiotics/metabolism, Humans, Animals
2.
Chem Res Toxicol ; 37(4): 600-619, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38498310

ABSTRACT

Regulatory authorities aim to organize substances into groups to facilitate prioritization within hazard and risk assessment processes. Often, such chemical groupings are not explicitly defined by structural rules or physicochemical property information. This is largely due to how these groupings are developed, namely through a manual expert curation process, which in turn makes updating and refining groupings as new substances are evaluated a practical challenge. Herein, machine learning methods were leveraged to build models that could preliminarily assign substances to predefined groups. A set of 86 groupings containing 2,184 substances, as published on the European Chemicals Agency (ECHA) website, was mapped to the U.S. Environmental Protection Agency (EPA) Distributed Structure-Searchable Toxicity (DSSTox) database content to extract chemical and structural information. Substances were represented using Morgan fingerprints, and two machine learning approaches, k-nearest neighbor (kNN) and random forest (RF), were used to classify test substances into the 56 groups containing at least 10 substances with a structural representation in the data set, yielding mean 5-fold cross-validation test accuracies (average F1 scores) of 0.781 and 0.853, respectively. With a 9% improvement, the RF classifier was significantly more accurate than kNN (p-value = 0.001). The approach offers promise as a means of initially profiling new substances into predefined groups to facilitate prioritization efforts and streamline the assessment of new substances when earlier groupings are available. The algorithm to fit and use these models has been made available in the accompanying repository, enabling both use of the produced models and refitting of these models as new groupings become available from regulatory authorities or industry.
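
For readers unfamiliar with the workflow, a rough sketch of the general approach (placeholder SMILES and group labels, not the published models or data) could be assembled with RDKit and scikit-learn:

# Sketch of the general workflow only: represent substances as Morgan fingerprints
# and fit a random forest classifier to assign them to predefined groups.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N"]       # placeholder structures
groups = ["alcohols", "amines", "alcohols", "amines"]    # placeholder group labels

def morgan_fp(smi, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smi)
    bv = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=int)
    DataStructs.ConvertToNumpyArray(bv, arr)
    return arr

X = np.array([morgan_fp(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X, groups)
# With the real data set, performance would be assessed with 5-fold cross-validated
# F1 scores (e.g., sklearn.model_selection.cross_val_score with scoring="f1_macro").
print(clf.predict([morgan_fp("CCCO")]))  # assign a new substance to a group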


Subject(s)
Algorithms, Machine Learning, United States, United States Environmental Protection Agency, Factual Databases
3.
Toxicol Appl Pharmacol ; 468: 116513, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37044265

ABSTRACT

'Cell Painting' is an imaging-based high-throughput phenotypic profiling (HTPP) method in which cultured cells are fluorescently labeled to visualize subcellular structures (i.e., nucleus, nucleoli, endoplasmic reticulum, cytoskeleton, Golgi apparatus / plasma membrane and mitochondria) and to quantify morphological changes in response to chemicals or other perturbagens. HTPP is a high-throughput and cost-effective bioactivity screening method that detects effects associated with many different molecular mechanisms in an untargeted manner, enabling rapid in vitro hazard assessment for thousands of chemicals. Here, 1201 chemicals from the ToxCast library were screened in concentration-response up to ∼100 µM in human U-2 OS cells using HTPP. A phenotype-altering concentration (PAC) was estimated for chemicals active in the tested range. PACs tended to be higher than lower bound potency values estimated from a broad collection of targeted high-throughput assays, but lower than the threshold for cytotoxicity. In vitro to in vivo extrapolation (IVIVE) was used to estimate administered equivalent doses (AEDs) based on PACs for comparison to human exposure predictions. AEDs for 18/412 chemicals overlapped with predicted human exposures. Phenotypic profile information was also leveraged to identify putative mechanisms of action and group chemicals. Of 58 known nuclear receptor modulators, only glucocorticoids and retinoids produced characteristic profiles; both receptor types are expressed in U-2 OS cells. Thirteen chemicals with profile similarity to glucocorticoids were tested in a secondary screen and one chemical, pyrene, was confirmed by an orthogonal gene expression assay as a novel putative GR-modulating chemical. Most active chemicals demonstrated profiles not associated with a known mechanism of action. However, many structurally related chemicals produced similar profiles, with exceptions such as diniconazole, whose profile differed from other active conazoles. Overall, the present study demonstrates how HTPP can be applied in screening-level chemical assessments through a series of examples and brief case studies.
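
The IVIVE step is essentially a reverse-dosimetry calculation; below is a minimal sketch assuming linear kinetics and a toxicokinetic estimate of steady-state plasma concentration per unit dose (e.g., from a model such as EPA's httk). The numbers are placeholders, not values from the study:

# Sketch of a reverse-dosimetry administered equivalent dose (AED) calculation.
# Assumes linear kinetics: Css scales proportionally with the daily dose.
def administered_equivalent_dose(pac_uM, css_uM_per_mg_kg_day):
    """pac_uM: phenotype-altering concentration from the in vitro screen (µM).
    css_uM_per_mg_kg_day: steady-state plasma concentration (µM) predicted for a
    1 mg/kg/day dose, e.g., from a toxicokinetic model (placeholder here).
    Returns the AED in mg/kg/day."""
    return pac_uM / css_uM_per_mg_kg_day

# Hypothetical example: PAC of 3 µM and a modeled Css of 1.5 µM per 1 mg/kg/day
print(administered_equivalent_dose(3.0, 1.5))  # 2.0 mg/kg/day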


Subject(s)
Biological Assay, High-Throughput Screening Assays, Humans, Risk Assessment/methods, High-Throughput Screening Assays/methods, Cultured Cells, Biological Assay/methods
4.
Chem Res Toxicol ; 36(3): 508-534, 2023 03 20.
Article in English | MEDLINE | ID: mdl-36862450

ABSTRACT

The term PFAS encompasses diverse per- and polyfluorinated alkyl (and increasingly aromatic) chemicals spanning industrial processes, commercial uses, environmental occurrence, and potential concerns. With increased chemical curation, currently exceeding 14,000 structures in the PFASSTRUCTV5 inventory on EPA's CompTox Chemicals Dashboard, has come increased motivation to profile, categorize, and analyze the PFAS structure space using modern cheminformatics approaches. Making use of the publicly available ToxPrint chemotypes and ChemoTyper application, we have developed a new PFAS-specific fingerprint set consisting of 129 TxP_PFAS chemotypes coded in CSRML, a chemical-based XML-query language. These are split into two groups, the first containing 56 mostly bond-type ToxPrints modified to incorporate attachment to either a CF group or F atom to enforce proximity to the fluorinated portion of the chemical. This focus resulted in a dramatic reduction in TxP_PFAS chemotype counts relative to the corresponding ToxPrint counts (averaging 54%). The remaining TxP_PFAS chemotypes consist of various lengths and types of fluorinated chains, rings, and bonding patterns covering indications of branching, alternate halogenation, and fluorotelomers. Both groups of chemotypes are well represented across the PFASSTRUCT inventory. Using the ChemoTyper application, we show how the TxP_PFAS chemotypes can be visualized, filtered, and used to profile the PFASSTRUCT inventory, as well as to construct chemically intuitive, structure-based PFAS categories. Lastly, we used a selection of expert-based PFAS categories from the OECD Global PFAS list to evaluate a small set of analogous structure-based TxP_PFAS categories. TxP_PFAS chemotypes were able to recapitulate the expert-based PFAS category concepts based on clearly defined structure rules that can be computationally implemented and reproducibly applied to process large PFAS inventories without need to consult an expert. The TxP_PFAS chemotypes have the potential to support computational modeling, harmonize PFAS structure-based categories, facilitate communication, and allow for more efficient and chemically informed exploration of PFAS chemicals moving forward.
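
The TxP_PFAS chemotypes themselves are coded in CSRML for use in ChemoTyper; purely to illustrate the underlying idea of profiling an inventory against fluorine-proximal substructure queries, a rough Python analogue with simplified SMARTS patterns (not the published chemotype definitions) might look like:

# Illustrative only: simple SMARTS stand-ins for fluorine-proximal substructure
# queries, used to profile a small list of SMILES. Not the CSRML TxP_PFAS chemotypes.
from rdkit import Chem

example_queries = {
    "CF3_terminal": Chem.MolFromSmarts("C(F)(F)F"),
    "CF2_chain": Chem.MolFromSmarts("FC(F)C(F)F"),
    "acid_next_to_CF": Chem.MolFromSmarts("C(=O)(O)C(F)(F)"),
}

inventory = {
    "PFOA-like": "OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F",
    "non-PFAS": "CCO",
}

for name, smi in inventory.items():
    mol = Chem.MolFromSmiles(smi)
    hits = [q for q, patt in example_queries.items() if mol.HasSubstructMatch(patt)]
    print(name, hits)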


Subject(s)
Cheminformatics, Fluorocarbons, Computer Simulation, Fluorocarbons/chemistry
5.
Chem Res Toxicol ; 36(3): 402-419, 2023 03 20.
Article in English | MEDLINE | ID: mdl-36821828

ABSTRACT

Per- and polyfluoroalkyl substances (PFAS) are a diverse set of commercial chemicals widely detected in humans and the environment. However, only a limited number of PFAS are associated with epidemiological or experimental data for hazard identification. To provide developmental neurotoxicity (DNT) hazard information, the work herein employed DNT new approach methods (NAMs) to generate in vitro screening data for a set of 160 PFAS. The DNT NAMs battery was comprised of the microelectrode array neuronal network formation assay (NFA) and high-content imaging (HCI) assays to evaluate proliferation, apoptosis, and neurite outgrowth. The majority of PFAS (118/160) were inactive or equivocal in the DNT NAMs, leaving 42 active PFAS that decreased measures of neural network connectivity and neurite length. Analytical quality control indicated 43/118 inactive PFAS samples and 10/42 active PFAS samples were degraded; as such, careful interpretation is required as some negatives may have been due to loss of the parent PFAS, and some actives may have resulted from a mixture of parent and/or degradants of PFAS. PFAS containing a perfluorinated carbon (C) chain length ≥8, a high C:fluorine ratio, or a carboxylic acid moiety were more likely to be bioactive in the DNT NAMs. Of the PFAS positives in DNT NAMs, 85% were also active in other EPA ToxCast assays, whereas 79% of PFAS inactives in the DNT NAMs were active in other assays. These data demonstrate that a subset of PFAS perturb neurodevelopmental processes in vitro and suggest focusing future studies of DNT on PFAS with certain structural feature descriptors.
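
As a rough illustration of flagging the structural features highlighted above, the following heuristics approximate perfluorinated chain length and the carboxylic acid moiety with RDKit; these are simplifications for illustration, not the descriptors used in the study:

# Illustrative heuristics, not the study's descriptors: flag a carboxylic acid
# moiety and approximate the perfluorinated carbon count as carbons bearing >= 2 F.
from rdkit import Chem

carboxylic_acid = Chem.MolFromSmarts("C(=O)[OX2H1]")

def crude_pfas_flags(smiles):
    mol = Chem.MolFromSmiles(smiles)
    perfluorinated_c = sum(
        1 for atom in mol.GetAtoms()
        if atom.GetSymbol() == "C"
        and sum(n.GetSymbol() == "F" for n in atom.GetNeighbors()) >= 2
    )
    return {
        "perfluorinated_C_count": perfluorinated_c,
        "chain_length_ge_8": perfluorinated_c >= 8,
        "has_carboxylic_acid": mol.HasSubstructMatch(carboxylic_acid),
    }

# Hypothetical example: a PFNA-like structure (eight CF2/CF3 carbons plus an acid group)
print(crude_pfas_flags("OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F"))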


Subject(s)
Fluorocarbons, Neurotoxicity Syndromes, Humans, Neurotoxicity Syndromes/metabolism, Neurons/metabolism, Neuronal Outgrowth, Apoptosis, Fluorocarbons/toxicity
6.
Regul Toxicol Pharmacol ; 137: 105293, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36414101

ABSTRACT

The assessment of human health hazards posed by chemicals traditionally relies on toxicity studies in experimental animals. However, most chemicals currently in commerce do not meet the minimum data requirements for hazard identification and dose-response analysis in human health risk assessment. Previously, we introduced a read-across framework designed to address data gaps for screening-level assessment of chemicals with insufficient in vivo toxicity information (Wang et al., 2012). It relies on inference by analogy from suitably tested source analogues to a target chemical, based on structural, toxicokinetic, and toxicodynamic similarity. This approach has been used for dose-response assessment of data-poor chemicals relevant to the U.S. EPA's Superfund program. We present herein case studies of the application of this framework, highlighting specific examples of the use of biological similarity for chemical grouping and quantitative read-across. Based on practical knowledge and technological advances in the fields of read-across and predictive toxicology, we propose a revised framework. It includes important considerations for problem formulation, systematic review, target chemical analysis, analogue identification, analogue evaluation, and incorporation of new approach methods. This work emphasizes the integration of systematic methods and alternative toxicity testing data and tools in chemical risk assessment to inform regulatory decision-making.


Subject(s)
Risk Assessment, Animals, Humans, Risk Assessment/methods
7.
Regul Toxicol Pharmacol ; 142: 105434, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37302561

ABSTRACT

A challenging step in human risk assessment of chemicals is the derivation of safe thresholds. The Threshold of Toxicological Concern (TTC) concept is one option which can be used for the safety evaluation of substances with a limited toxicity dataset, but for which exposure is sufficiently low. The application of the TTC is generally accepted for orally or dermally exposed cosmetic ingredients; however, these values cannot be applied directly to the inhalation route because of differences between inhalation and the oral and dermal exposure routes. Various inhalation TTC approaches have been developed over recent years to address this. A virtual workshop organized by Cosmetics Europe, held in November 2020, shared the current state of the science regarding the applicability of existing inhalation TTC approaches to cosmetic ingredients. Key discussion points included the need for an inhalation TTC for local respiratory tract effects in addition to a systemic inhalation TTC, dose metrics, database building and quality of studies, definition of the chemical space and applicability domain, and classification of chemicals with different potencies. The progress made to date in deriving inhalation TTCs was highlighted, as well as the next steps envisaged to develop them further for regulatory acceptance and use.


Subject(s)
Cosmetics, Humans, No-Observed-Adverse-Effect Level, Cosmetics/toxicity, Respiratory System, Europe, Risk Assessment
8.
Bioinformatics ; 37(19): 3380-3381, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-33772575

ABSTRACT

MOTIVATION: Generalized Read-Across (GenRA) is a data-driven approach to estimate physico-chemical, biological or eco-toxicological properties of chemicals by inference from analogues. GenRA attempts to mimic a human expert's manual read-across reasoning for filling data gaps about new chemicals from known chemicals with an interpretable and automated approach based on nearest-neighbors. A key objective of GenRA is to systematically explore different choices of input data selection and neighborhood definition to objectively evaluate predictive performance of automated read-across estimates of chemical properties. RESULTS: We have implemented genra-py as a python package that can be freely used for chemical safety analysis and risk assessment applications. Automated read-across prediction in genra-py conforms to the scikit-learn machine learning library's estimator design pattern, making it easy to use and integrate in computational pipelines. We demonstrate the data-driven application of genra-py to address two key human health risk assessment problems, namely hazard identification and point of departure estimation. AVAILABILITY AND IMPLEMENTATION: The package is available from github.com/i-shah/genra-py.
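
Purely to illustrate the scikit-learn estimator design pattern mentioned above (this is a sketch, not genra-py's actual classes or API), a similarity-weighted nearest-neighbor read-across estimator could be written as:

# Illustration of the scikit-learn estimator pattern for similarity-weighted
# read-across; NOT genra-py's actual classes or API. X is expected to be a
# binary fingerprint matrix so that the Jaccard metric is meaningful.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.neighbors import NearestNeighbors

class ReadAcrossRegressor(BaseEstimator, RegressorMixin):
    def __init__(self, n_neighbors=5):
        self.n_neighbors = n_neighbors

    def fit(self, X, y):
        # X: fingerprints of source analogues; y: their known property values
        self.nn_ = NearestNeighbors(n_neighbors=self.n_neighbors, metric="jaccard")
        self.nn_.fit(X)
        self.y_ = np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        # Weight each analogue's value by its similarity (1 - Jaccard distance);
        # fall back to uniform weights if all similarities are zero.
        dist, idx = self.nn_.kneighbors(X)
        sim = 1.0 - dist
        weights = np.where(sim.sum(axis=1, keepdims=True) > 0, sim, 1.0)
        return (weights * self.y_[idx]).sum(axis=1) / weights.sum(axis=1)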

9.
Chem Res Toxicol ; 35(11): 1929-1949, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36301716

ABSTRACT

Screening new compounds for potential bioactivities against cellular targets is vital for drug discovery and chemical safety. Transcriptomics offers an efficient approach for assessing global gene expression changes, but interpreting chemical mechanisms from these data is often challenging. Connectivity mapping is a potential data-driven avenue for linking chemicals to mechanisms based on the observation that many biological processes are associated with unique gene expression signatures (gene signatures). However, mining the effects of a chemical on gene signatures for biological mechanisms is challenging because transcriptomic data contain thousands of noisy genes. New connectivity mapping approaches seeking to distinguish signal from noise continue to be developed, spurred by the promise of discovering chemical mechanisms, new drugs, and disease targets from burgeoning transcriptomic data. Here, we analyze these approaches in terms of diverse transcriptomic technologies, public databases, gene signatures, pattern-matching algorithms, and statistical evaluation criteria. To navigate the complexity of connectivity mapping, we propose a harmonized scheme to coherently organize and compare published workflows. We first standardize concepts underlying transcriptomic profiles and gene signatures based on various transcriptomic technologies such as microarrays, RNA-Seq, and L1000 and discuss the widely used data sources such as Gene Expression Omnibus, ArrayExpress, and MSigDB. Next, we generalize connectivity mapping as a pattern-matching task for finding similarity between a query (e.g., the transcriptomic profile for a new chemical) and a reference (e.g., the gene signature of a known target). Published pattern-matching approaches fall into two main categories: vector-based approaches use metrics such as correlation or the Jaccard index, whereas aggregation-based approaches use parametric and nonparametric statistics (e.g., gene set enrichment analysis). The statistical methods for evaluating the performance of different approaches are described, along with comparisons reported in the literature on benchmark transcriptomic data sets. Lastly, we review connectivity mapping applications in toxicology and offer guidance on evaluating chemical-induced toxicity with concentration-response transcriptomic data. In addition to serving as a high-level guide and tutorial for understanding and implementing connectivity mapping workflows, we hope this review will stimulate new algorithms for evaluating chemical safety and drug discovery using transcriptomic data.
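
As a concrete example of the vector-based category (placeholder values, not any specific published workflow), a Pearson correlation between profiles and a Jaccard index between significant-gene sets can be computed as follows:

# Minimal sketch of vector-based connectivity scoring with placeholder data:
# Pearson correlation between expression profiles and Jaccard index between gene sets.
import numpy as np

query_profile = np.array([2.1, -0.3, 1.7, 0.2, -1.5])      # log2 fold-changes (placeholder)
reference_profile = np.array([1.8, 0.1, 2.0, -0.4, -1.1])  # reference signature (placeholder)

pearson_r = np.corrcoef(query_profile, reference_profile)[0, 1]

query_genes = {"GENE1", "GENE3", "GENE5"}       # significantly changed genes (placeholder)
reference_genes = {"GENE1", "GENE3", "GENE4"}
jaccard = len(query_genes & reference_genes) / len(query_genes | reference_genes)

print(f"Pearson r = {pearson_r:.2f}, Jaccard = {jaccard:.2f}")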


Subject(s)
Gene Expression Profiling, Transcriptome, Gene Expression Profiling/methods, Workflow, Factual Databases, Drug Discovery
10.
Regul Toxicol Pharmacol ; 135: 105249, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36041585

ABSTRACT

Structure-activity relationships (SARs) in toxicology have enabled the formation of structural rules which, when coded as structural alerts, are essential tools in in silico toxicology. Whilst other in silico methods have approaches for their evaluation, there is no formal process to assess the confidence that may be associated with a structural alert. This investigation proposes twelve criteria to assess the uncertainty associated with structural alerts, allowing for an assessment of confidence. The criteria are based around the stated purpose, description of the chemistry, toxicology and mechanism, performance and coverage, as well as corroborating and supporting evidence of the alert. Alerts can be given a confidence assessment and score, enabling the identification of areas where more information may be beneficial. The scheme to evaluate structural alerts was placed in the context of various use cases for industrial and regulatory applications. The analysis of alerts, and consideration of the evaluation scheme, identifies the different characteristics an alert may have, such as being highly specific or generic. These characteristics may determine when an alert can be used for specific uses such as identification of analogues for read-across or hazard identification.


Subject(s)
Uncertainty, Structure-Activity Relationship
11.
Regul Toxicol Pharmacol ; 109: 104505, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31639428

ABSTRACT

The Toxic Substances Control Act (TSCA) mandates that the US EPA perform risk-based prioritisation of chemicals in commerce and then, for high-priority substances, develop risk evaluations that integrate toxicity data with exposure information. One approach being considered for data-poor chemicals is the Threshold of Toxicological Concern (TTC). Here, TTC values derived using oral (sub)chronic No Observed (Adverse) Effect Level (NO(A)EL) data from the EPA's Toxicity Values database (ToxValDB) were compared with published TTC values from Munro et al. (1996). A total of 4554 chemicals with structures present in ToxValDB were assigned to their respective TTC categories using the Toxtree software tool; toxicity data were available for 1304 of these substances. The TTC values derived from ToxValDB were similar, but not identical, to the Munro TTC values: Cramer I (ToxValDB 37.3 vs. Munro 30 µg/kg-day), Cramer II (34.6 vs. 9.1 µg/kg-day) and Cramer III (3.9 vs. 1.5 µg/kg-day). The Cramer III 5th percentile values were found to be statistically different. Chemical features of the two Cramer III datasets were evaluated to account for the differences. TTC values derived from this expanded dataset substantiated the original TTC values, reaffirming the utility of TTC as a promising tool in a risk-based prioritisation approach.
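
The Munro-style derivation underlying these values takes the 5th percentile of the NO(A)EL distribution for a Cramer class and divides it by a 100-fold uncertainty factor; a minimal sketch with placeholder NO(A)ELs and an assumed log-normal fit:

# Sketch of a Munro-style TTC derivation for one Cramer class (placeholder data):
# the 5th percentile of the log-normally distributed NO(A)ELs divided by a
# 100-fold uncertainty factor gives the TTC human exposure threshold.
import numpy as np

noael_mg_kg_day = np.array([1.2, 5.0, 0.8, 15.0, 3.3, 40.0, 0.5, 7.5])  # placeholders

log_noael = np.log10(noael_mg_kg_day)
p5_log = np.mean(log_noael) - 1.645 * np.std(log_noael, ddof=1)  # 5th percentile of fitted normal
noael_p5 = 10 ** p5_log                                          # mg/kg-day
ttc_ug_kg_day = noael_p5 / 100 * 1000                            # apply 100x factor, convert to µg

print(f"5th percentile NO(A)EL: {noael_p5:.2f} mg/kg-day; TTC: {ttc_ug_kg_day:.1f} µg/kg-day")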


Subject(s)
Hazardous Substances/standards, Threshold Limit Values, Toxicology/standards, United States Environmental Protection Agency/standards, Factual Databases, Hazardous Substances/toxicity, Humans, No-Observed-Adverse-Effect Level, Risk Assessment/standards, Software, Chronic Toxicity Tests/standards, Subchronic Toxicity Tests/standards, Toxicology/legislation & jurisprudence, United States
12.
Regul Toxicol Pharmacol ; 101: 12-23, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30359698

ABSTRACT

The application of toxic equivalency factors (TEFs) or toxic units to estimate toxic potencies for mixtures of chemicals which contribute to a biological effect through a common mechanism is one approach for filling data gaps. Toxic Equivalents (TEQ) have been used to express the toxicity of dioxin-like compounds (i.e., dioxins, furans, and dioxin-like polychlorinated biphenyls (PCBs)) in terms of the most toxic form of dioxin: 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD). This study sought to integrate two data gap filling techniques, quantitative structure-activity relationships (QSARs) and TEFs, to predict neurotoxicity TEQs for PCBs. Simon et al. (2007) previously derived neurotoxic equivalent (NEQ) values for a dataset of 87 PCB congeners, of which 83 congeners had experimental data. These data were taken from a set of four different studies measuring different effects related to neurotoxicity, each of which tested overlapping subsets of the 83 PCB congeners. The goals of the current study were to: (i) evaluate alternative neurotoxic equivalent factor (NEF) derivations from an expanded dataset, relative to those derived by Simon et al., and (ii) develop QSAR models to provide NEF estimates for the large number of untested PCB congeners. The models used multiple linear regression, support vector regression, k-nearest neighbor and random forest algorithms within a 5-fold cross-validation scheme, with position-specific chlorine substitution patterns on the biphenyl scaffold as descriptors. Alternative NEF values were derived, but the resulting QSAR models had relatively low predictivity (RMSE ∼0.24), driven mostly by the large uncertainties in the underlying data and NEF values. The derived NEFs, and the QSAR-predicted NEFs used to fill data gaps, should therefore be applied with caution.
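
A rough sketch of the modelling setup described above (placeholder congener descriptors and NEF values, not the study's data or code):

# Sketch only: encode PCB congeners as 10 binary chlorine-position descriptors
# (positions 2-6 and 2'-6' on the biphenyl scaffold) and fit a random forest
# regressor within cross-validation. Descriptor rows and NEF values are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# columns: Cl present at positions 2,3,4,5,6,2',3',4',5',6'
X = np.array([
    [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],   # placeholder tetrachloro congener
    [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
])
y = np.array([0.6, 0.3, 0.4, 0.8, 0.1, 0.5])   # placeholder NEF values

model = RandomForestRegressor(n_estimators=200, random_state=0)
cv = KFold(n_splits=3, shuffle=True, random_state=0)  # study used 5-fold; fewer here for the toy data
rmse = -cross_val_score(model, X, y, cv=cv, scoring="neg_root_mean_squared_error")
print(rmse.mean())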


Subject(s)
Environmental Pollutants/toxicity, Neurotoxicity Syndromes, Polychlorinated Biphenyls/toxicity, Animals, Brain/metabolism, Calcium/metabolism, Dopamine/metabolism, Environmental Pollutants/chemistry, PC12 Cells, Polychlorinated Biphenyls/chemistry, Protein Kinase C/metabolism, Quantitative Structure-Activity Relationship, Rats, Risk Assessment, Ryanodine Receptor Calcium Release Channel/metabolism
13.
Regul Toxicol Pharmacol ; 106: 278-291, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31121201

ABSTRACT

Traditional approaches for chemical risk assessment cannot keep pace with the number of substances requiring assessment. Thus, in a global effort to expedite and modernize chemical risk assessment, New Approach Methodologies (NAMs) are being explored and developed. Included in this effort is the OECD Integrated Approaches for Testing and Assessment (IATA) program, which provides a forum for OECD member countries to develop and present case studies illustrating the application of NAM in various risk assessment contexts. Here, we present an IATA case study for the prediction of estrogenic potential of three target phenols: 4-tert-butylphenol, 2,4-di-tert-butylphenol and octabenzone. Key features of this IATA include the use of two computational approaches for analogue selection for read-across, data collected from traditional and NAM sources, and a workflow to generate predictions regarding the targets' ability to bind the estrogen receptor (ER). Endocrine disruption can occur when a chemical substance mimics the activity of natural estrogen by binding to the ER and, if potency and exposure are sufficient, alters the function of the endocrine system to cause adverse effects. The data indicated that of the three target substances that were considered herein, 4-tert-butylphenol is a potential endocrine disruptor. Further, this IATA illustrates that the NAM approach explored is health protective when compared to in vivo endpoints traditionally used for human health risk assessment.


Subject(s)
Benzophenones/pharmacology, Phenols/pharmacology, Estrogen Receptors/metabolism, Benzophenones/chemistry, Humans, Molecular Structure, Phenols/chemistry, Risk Assessment
14.
Regul Toxicol Pharmacol ; 106: 197-209, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31078681

ABSTRACT

Read-across is a well-established data gap-filling technique applied for regulatory purposes. In the US Environmental Protection Agency's New Chemicals Program under TSCA, read-across has been used extensively for decades; however, the extent of application and acceptance of read-across among U.S. federal agencies is less clear. In an effort to build read-across capacity, raise awareness of the state of the science, and work towards a harmonization of read-across approaches across U.S. agencies, a new read-across workgroup was established under the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM). This is one of several ad hoc groups ICCVAM has convened to implement the ICCVAM Strategic Roadmap. In this article, we outline the charge and scope of the workgroup and summarize the current applications, tools used, and needs of the agencies represented on the workgroup for read-across. Of the agencies surveyed, the Environmental Protection Agency had the greatest experience in using read-across, whereas other agencies indicated that they would benefit from gaining a perspective of the landscape of the tools and available guidance. Two practical case studies are also described to illustrate how the read-across approaches applied by two agencies vary on account of decision context.


Subject(s)
Toxicity Tests, United States Government Agencies, Humans, United States, United States Environmental Protection Agency/organization & administration
15.
J Appl Toxicol ; 38(1): 41-50, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28543848

ABSTRACT

There is an expectation that to meet regulatory requirements, and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here, a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and h-CLAT as follows: (1) perform DPRA - if positive, classify as sensitizing, and (2) if negative, perform h-CLAT - a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
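
The two decision strategies compared above can be illustrated compactly with assay outcomes represented as booleans (an illustration only, not the authors' implementation):

# Sketch of the two defined approaches, with assay outcomes as booleans (True = positive).
def sequential_dpra_hclat(dpra_positive, hclat_positive):
    """Sequential strategy: DPRA first; only if DPRA is negative, run h-CLAT."""
    if dpra_positive:
        return "sensitizer"
    return "sensitizer" if hclat_positive else "non-sensitizer"

def two_out_of_three(dpra_positive, keratinosens_positive, hclat_positive):
    """'Two out of three' strategy: majority call across DPRA, KeratinoSens and h-CLAT."""
    positives = sum([dpra_positive, keratinosens_positive, hclat_positive])
    return "sensitizer" if positives >= 2 else "non-sensitizer"

# Hypothetical chemical: DPRA negative, KeratinoSens positive, h-CLAT negative
print(sequential_dpra_hclat(False, False))   # non-sensitizer
print(two_out_of_three(False, True, False))  # non-sensitizer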


Subject(s)
Animal Testing Alternatives, Allergic Contact Dermatitis/etiology, Hazardous Substances/toxicity, Skin/drug effects, Toxicity Tests/methods, Animals, Cell Line, Allergic Contact Dermatitis/immunology, Humans, Local Lymph Node Assay, Mice, Predictive Value of Tests, Skin/immunology
16.
Chem Res Toxicol ; 30(11): 2046-2059, 2017 11 20.
Article in English | MEDLINE | ID: mdl-28768096

ABSTRACT

Animal testing alone cannot practically evaluate the health hazard posed by tens of thousands of environmental chemicals. Computational approaches making use of high-throughput experimental data may provide more efficient means to predict chemical toxicity. Here, we use a supervised machine learning strategy to systematically investigate the relative importance of study type, machine learning algorithm, and type of descriptor on predicting in vivo repeat-dose toxicity at the organ level. A total of 985 compounds were represented using chemical structural descriptors, ToxPrint chemotype descriptors, and bioactivity descriptors from ToxCast in vitro high-throughput screening assays. Using ToxRefDB, a total of 35 target organ outcomes were identified that contained at least 100 chemicals (50 positive and 50 negative). Supervised machine learning was performed using Naïve Bayes, k-nearest neighbor, random forest, classification and regression trees, and support vector classification approaches. Model performance was assessed based on F1 scores using 5-fold cross-validation with balanced bootstrap replicates. Fixed-effects modeling showed that the variance in F1 scores was explained mostly by target organ outcome, followed by descriptor type, machine learning algorithm, and interactions between these three factors. A combination of bioactivity and chemical structure or chemotype descriptors was the most predictive. Model performance improved with more chemicals (up to a maximum of 24%), and these gains were correlated (ρ = 0.92) with the number of chemicals. Overall, the results demonstrate that a combination of bioactivity and chemical descriptors can accurately predict a range of target organ toxicity outcomes in repeat-dose studies, but specific experimental and methodologic improvements may increase predictivity.


Subject(s)
Environmental Pollutants/toxicity, Machine Learning, Toxicity Tests/methods, Animals, Factual Databases, Environmental Pollutants/chemistry, Humans, Biological Models, Quantitative Structure-Activity Relationship
17.
Regul Toxicol Pharmacol ; 86: 74-92, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28242142

ABSTRACT

Predictive toxicity models rely on large amounts of accurate in vivo data. Here, we analyze the quality of in vivo data from the U.S. EPA Toxicity Reference Database (ToxRefDB), using chemical-induced anemia as an example. Considerations include variation in experimental conditions, changes in terminology over time, distinguishing negative from missing results, observer and diagnostic bias, and data transcription errors. Within ToxRefDB, we use hematological data on 658 chemicals tested in one or more of 1738 studies (subchronic rat or chronic rat, mouse, or dog). Anemia was reported most frequently in the rat subchronic studies, followed by chronic studies in dog, rat, and then mouse. Concordance between studies for a positive finding of anemia (same chemical, different laboratories) ranged from 90% (rat subchronic predicting rat chronic) to 40% (mouse chronic predicting rat chronic). Concordance increased with manual curation by 20% on average. We identified 49 chemicals that showed an anemia phenotype in at least two species. These included 14 aniline moiety-containing compounds that were further analyzed for their potential to be metabolically transformed into substituted anilines, which are known anemia-causing chemicals. This analysis should help inform future use of in vivo databases for model development.
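
Concordance here is the fraction of chemicals positive for anemia in one study type that are also positive in another; a minimal sketch with placeholder calls:

# Sketch: concordance for a positive finding between two study types, i.e. the
# fraction of chemicals positive in the first study type that are also positive
# in the second. The chemical calls below are placeholders.
def positive_concordance(calls_a, calls_b):
    """calls_a, calls_b: dict of chemical -> bool (anemia observed)."""
    positives_a = [chem for chem, pos in calls_a.items() if pos and chem in calls_b]
    if not positives_a:
        return None
    agree = sum(calls_b[chem] for chem in positives_a)
    return agree / len(positives_a)

rat_subchronic = {"chem1": True, "chem2": True, "chem3": False}
rat_chronic = {"chem1": True, "chem2": False, "chem3": False}
print(positive_concordance(rat_subchronic, rat_chronic))  # 0.5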


Subject(s)
Anemia/chemically induced, Data Mining, Factual Databases, Chronic Toxicity Tests/statistics & numerical data, Subchronic Toxicity Tests/statistics & numerical data, Animals, Dogs, Mice, Rats, Reference Values, Retrospective Studies, United States, United States Environmental Protection Agency
18.
J Appl Toxicol ; 37(1): 105-116, 2017 01.
Article in English | MEDLINE | ID: mdl-27283458

ABSTRACT

It is widely accepted that substances must have a molecular weight (MW) < 500 to penetrate effectively through the skin to induce sensitization. Roberts et al. (2012, Contact Dermatitis 68: 32-41) evaluated a data set of 699 substances taken from the TIMES-SS expert system and identified that of the 13 substances with a MW > 500, five were sensitizers. This provided good evidence to refute such a MW 500 threshold. While Roberts et al. (2012) made a convincing case that the MW > 500 cut-off was not a true requirement for sensitization, the number of counterexamples identified was too few to draw any statistical conclusions. This updated analysis systematically interrogated a large repository of sensitization information collected under the EU REACH regulation. A data set of 2904 substances that had been tested for skin sensitization using guinea pigs and/or mice was collected. The data set contained 197 substances with a MW > 500; 33 of these were skin sensitizers. Metal-containing complexes, reaction products and mixtures were excluded from further consideration. The final set of 14 sensitizers substantiated the original findings. The study also assessed whether the same reaction chemistry principles established for low MW sensitizers applied to chemicals with a MW > 500. The existing reaction chemistry considerations were found appropriate to rationalize the sensitization behaviour of the 14 sensitizers with a MW > 500. The existence of the MW 500 threshold, based on the widespread misconception that the ability to efficiently penetrate the stratum corneum is a key determinant of skin sensitization potential and potency, was refuted. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Allergens/chemistry, Allergic Contact Dermatitis/immunology, Organic Chemicals/chemistry, Allergens/immunology, Allergens/toxicity, Animals, Factual Databases, Allergic Contact Dermatitis/etiology, Molecular Weight, Organic Chemicals/immunology, Organic Chemicals/toxicity
19.
J Appl Toxicol ; 37(1): 117-127, 2017 01.
Article in English | MEDLINE | ID: mdl-27357739

ABSTRACT

It is widely accepted that substances that cannot penetrate through the skin will not be sensitizers. LogKow and molecular weight (MW) have been used to set thresholds for sensitization potential. Highly hydrophilic substances (e.g., LogKow ≤ 1) are expected not to penetrate effectively enough to induce sensitization. To investigate whether LogKow > 1 is a true requirement for sensitization, a large dataset of substances that had been evaluated for their skin sensitization potential under Registration, Evaluation, Authorisation and restriction of CHemicals (REACH), together with available measured LogKow values, was compiled using the OECD eChemPortal. The incidence of sensitizers relative to non-sensitizers above and below a LogKow of 1 was explored. Reaction chemistry principles were used to explain the sensitization observed for the subset of substances with a LogKow ≤ 0. In total, 1482 substances were identified with skin sensitization data and measured LogKow values; 525 substances had a measured LogKow ≤ 1, of which 100 were sensitizers. There was no significant difference in the incidence of sensitizers above and below a LogKow of 1. Reaction chemistry principles that had been established for lower MW and more hydrophobic substances were found to be still valid in rationalizing the skin sensitizers with a LogKow ≤ 0. The LogKow threshold arises from the widespread misconception that the ability to efficiently penetrate the stratum corneum is a key determinant of sensitization potential and potency. Copyright © 2016 John Wiley & Sons, Ltd.
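
The incidence comparison could be tested with a 2x2 contingency table; in the sketch below the count of sensitizers with LogKow > 1 is a placeholder, since the abstract does not report it:

# Sketch of how the incidence comparison could be tested; the count of sensitizers
# with LogKow > 1 is a placeholder (not reported above).
from scipy.stats import fisher_exact

sens_low, total_low = 100, 525      # LogKow <= 1 (reported above)
sens_high, total_high = 190, 957    # LogKow > 1 (placeholder sensitizer count)

table = [[sens_low, total_low - sens_low],
         [sens_high, total_high - sens_high]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")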


Subject(s)
Allergens/pharmacokinetics, Allergic Contact Dermatitis/immunology, Organic Chemicals/pharmacokinetics, Skin Absorption/drug effects, Allergens/immunology, Allergens/toxicity, Animals, Factual Databases, Allergic Contact Dermatitis/etiology, Chemical Models, Organic Chemicals/immunology, Organic Chemicals/toxicity, Permeability
20.
Chem Res Toxicol ; 29(4): 438-51, 2016 Apr 18.
Article in English | MEDLINE | ID: mdl-26686752

ABSTRACT

Exploiting non-testing approaches to predict toxicity early in the drug discovery development cycle is a helpful component in minimizing expensive drug failures due to toxicity being identified in late development or even during clinical trials. Changes in regulations in the industrial chemicals and cosmetics sectors in recent years have prompted a significant number of advances in the development, application, and assessment of non-testing approaches, such as (Q)SARs. Many efforts have also been undertaken to establish guiding principles for performing read-across within category and analogue approaches. This review offers a perspective, as taken from these sectors, of the current status of non-testing approaches, their evolution in light of the advances in high-throughput approaches and constructs such as adverse outcome pathways, and their potential relevance for drug discovery. It also proposes a workflow for how non-testing approaches could be practically integrated within testing and assessment strategies.


Subject(s)
Animal Testing Alternatives/methods, Drug Discovery/methods, Drug-Related Side Effects and Adverse Reactions, Animals, Computer Simulation, Drug-Related Side Effects and Adverse Reactions/diagnosis, Expert Systems, Humans, Biological Models, Pharmaceutical Preparations/chemistry, Quantitative Structure-Activity Relationship, Risk Assessment, Toxicity Tests/methods