Results 1 - 20 of 96
1.
Chem Res Toxicol; 37(6): 878-893, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38736322

ABSTRACT

Adaptive stress response pathways (SRPs) restore cellular homeostasis following perturbation but may activate terminal outcomes like apoptosis, autophagy, or cellular senescence if disruption exceeds critical thresholds. Because SRPs hold the key to vital cellular tipping points, they are targeted for therapeutic interventions and assessed as biomarkers of toxicity. Hence, we are developing a public database of chemicals that perturb SRPs to enable new data-driven tools to improve public health. Here, we report on the automated text-mining pipeline we used to build and curate the first version of this database. First, we bootstrapped the database with 100 reference SRP chemicals gathered from published biomarker studies. Second, we used information retrieval to find co-occurrences of reference chemicals with SRP terms in PubMed abstracts and determined pairwise mutual information thresholds to filter biologically relevant relationships. Third, we applied these thresholds to find 1206 putative SRP perturbagens within thousands of substances in the Library of Integrated Network-Based Cellular Signatures (LINCS). To assign SRP activity to LINCS chemicals, domain experts had to manually review at least three publications for each of 1206 chemicals out of 181,805 total abstracts. To accomplish this efficiently, we implemented a machine learning approach to predict SRP classifications from texts to prioritize abstracts. In 5-fold cross-validation testing with a corpus derived from the 100 reference chemicals, artificial neural networks performed the best (F1-macro = 0.678) and prioritized 2479/181,805 abstracts for expert review, which resulted in 457 chemicals annotated with SRP activities. An independent analysis of enriched mechanisms of action and chemical use class supported the text-mined chemical associations (p < 0.05): heat shock inducers were linked to HSP90 inhibition, and DNA damage inducers to topoisomerase inhibition.
This database will enable novel applications of LINCS data to evaluate SRP activities and to further develop tools for biomedical information extraction from the literature.
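The pairwise mutual information filter described above can be sketched in a few lines. All counts below are hypothetical; the function simply scores how much more often a chemical and an SRP term co-occur in abstracts than independence would predict:

```python
import math

def pmi(n_pair, n_chem, n_term, n_total):
    """Pointwise mutual information between a chemical and an SRP term,
    estimated from abstract co-occurrence counts (all counts hypothetical)."""
    p_pair = n_pair / n_total
    p_chem = n_chem / n_total
    p_term = n_term / n_total
    return math.log2(p_pair / (p_chem * p_term))

# A positive score means the pair co-occurs more often than chance would
# predict; thresholding this score filters biologically relevant pairs.
score = pmi(n_pair=50, n_chem=200, n_term=500, n_total=100_000)
```

Pairs scoring above a tuned threshold would be retained as putative chemical-SRP relationships.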


Subject(s)
Data Mining, Humans, Stress, Physiological/drug effects, Databases, Factual
2.
Chem Res Toxicol; 35(11): 1929-1949, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36301716

ABSTRACT

Screening new compounds for potential bioactivities against cellular targets is vital for drug discovery and chemical safety. Transcriptomics offers an efficient approach for assessing global gene expression changes, but interpreting chemical mechanisms from these data is often challenging. Connectivity mapping is a potential data-driven avenue for linking chemicals to mechanisms based on the observation that many biological processes are associated with unique gene expression signatures (gene signatures). However, mining the effects of a chemical on gene signatures for biological mechanisms is challenging because transcriptomic data contain thousands of noisy genes. New connectivity mapping approaches seeking to distinguish signal from noise continue to be developed, spurred by the promise of discovering chemical mechanisms, new drugs, and disease targets from burgeoning transcriptomic data. Here, we analyze these approaches in terms of diverse transcriptomic technologies, public databases, gene signatures, pattern-matching algorithms, and statistical evaluation criteria. To navigate the complexity of connectivity mapping, we propose a harmonized scheme to coherently organize and compare published workflows. We first standardize concepts underlying transcriptomic profiles and gene signatures based on various transcriptomic technologies such as microarrays, RNA-Seq, and L1000, and discuss widely used data sources such as Gene Expression Omnibus, ArrayExpress, and MSigDB. Next, we generalize connectivity mapping as a pattern-matching task for finding similarity between a query (e.g., the transcriptomic profile of a new chemical) and a reference (e.g., the gene signature of a known target). Published pattern-matching approaches fall into two main categories: vector-based approaches use metrics such as correlation and the Jaccard index, while aggregation-based approaches use parametric and nonparametric statistics (e.g., gene set enrichment analysis).
The statistical methods for evaluating the performance of different approaches are described, along with comparisons reported in the literature on benchmark transcriptomic data sets. Lastly, we review connectivity mapping applications in toxicology and offer guidance on evaluating chemical-induced toxicity with concentration-response transcriptomic data. In addition to serving as a high-level guide and tutorial for understanding and implementing connectivity mapping workflows, we hope this review will stimulate new algorithms for evaluating chemical safety and drug discovery using transcriptomic data.
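The two pattern-matching categories can be illustrated with a minimal sketch. Gene names and expression values here are hypothetical: a vector-based similarity over expression changes, and a set-based overlap over top signature genes:

```python
def pearson(x, y):
    """Vector-based similarity: Pearson correlation between a query
    profile and a reference signature (values hypothetical)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def jaccard(a, b):
    """Set-based similarity: overlap of top-regulated gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical log fold changes for the same five genes
query = [1.2, -0.5, 0.8, -1.1, 0.3]
reference = [1.0, -0.4, 0.9, -1.3, 0.1]
r = pearson(query, reference)

# Hypothetical top up-regulated gene sets
sim = jaccard({"HSPA1A", "DDIT3", "ATF4"}, {"HSPA1A", "ATF4", "XBP1"})
```

A high correlation or overlap would suggest the query chemical acts through the reference mechanism.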


Subject(s)
Gene Expression Profiling, Transcriptome, Gene Expression Profiling/methods, Workflow, Databases, Factual, Drug Discovery
3.
Chem Res Toxicol; 34(2): 189-216, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33140634

ABSTRACT

Since 2009, the Tox21 project has screened ∼8500 chemicals in more than 70 high-throughput assays, generating upward of 100 million data points, with all data publicly available through partner websites at the United States Environmental Protection Agency (EPA), National Center for Advancing Translational Sciences (NCATS), and National Toxicology Program (NTP). Underpinning this public effort is the largest compound library ever constructed specifically for improving understanding of the chemical basis of toxicity across research and regulatory domains. Each Tox21 federal partner brought specialized resources and capabilities to the partnership, including three approximately equal-sized compound libraries. All Tox21 data generated to date have resulted from a confluence of ideas, technologies, and expertise used to design, screen, and analyze the Tox21 10K library. The different programmatic objectives of the partners led to three distinct, overlapping compound libraries that, when combined, not only covered a diversity of chemical structures, use-categories, and properties but also incorporated many types of compound replicates. The history of development of the Tox21 "10K" chemical library and data workflows implemented to ensure quality chemical annotations and allow for various reproducibility assessments are described. Cheminformatics profiling demonstrates how the three partner libraries complement one another to expand the reach of each individual library, as reflected in coverage of regulatory lists, predicted toxicity end points, and physicochemical properties. ToxPrint chemotypes (CTs) and enrichment approaches further demonstrate how the combined partner libraries amplify structure-activity patterns that would otherwise not be detected. 
Finally, CT enrichments are used to probe global patterns of activity in combined ToxCast and Tox21 activity data sets relative to test-set size and chemical versus biological end point diversity, illustrating the power of CT approaches to discern patterns in chemical-activity data sets. These results support a central premise of the Tox21 program: A collaborative merging of programmatically distinct compound libraries would yield greater rewards than could be achieved separately.


Subject(s)
Small Molecule Libraries/toxicity, Toxicity Tests, High-Throughput Screening Assays, Humans, United States, United States Environmental Protection Agency
4.
Toxicol Appl Pharmacol; 380: 114683, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31325560

ABSTRACT

Recent technological advances have moved the field of toxicogenomics from reliance on microarray platforms to high-throughput transcriptomic (HTTr) technologies that measure global gene expression. Gene expression biomarkers are emerging as useful tools for interpreting gene expression profiles to identify perturbations of targets of xenobiotic chemicals including those that act as endocrine disrupting chemicals (EDCs). Gene expression biomarkers are lists of similarly-regulated genes identified in global gene expression comparisons of cells or tissues 1) exposed to known agonists or antagonists of the transcription factor (TF) and 2) after expression of the TF itself is knocked down/knocked out or overexpressed. Estrogen receptor α (ERα) and androgen receptor (AR) biomarkers have been shown to be very accurate at identifying both agonists (94-97%) and antagonists (93-98%) in microarray data derived from human breast or prostate cancer cell lines. Importantly, the biomarkers have been shown to accurately replicate the results of computational models that predict ERα or AR modulation using multiple ToxCast HT screening assays. An integrated screening strategy using sets of biomarkers that simultaneously predict various EDC targets in relevant cell lines should simplify chemical screening without sacrificing accuracy. The biomarker predictions can be put into the context of the adverse outcome pathway framework to help prioritize chemicals with the greatest risk of potential adverse outcomes in the endocrine systems of animals and people.


Subject(s)
Endocrine Disruptors/toxicity, Receptors, Androgen/genetics, Receptors, Estrogen/genetics, Animals, Biomarkers/analysis, Gene Expression, Humans
5.
Toxicol Appl Pharmacol; 380: 114707, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31404555

ABSTRACT

New approach methodologies (NAMs) in chemical safety evaluation are being explored to address the current public health implications of human environmental exposures to chemicals with limited or no data for assessment. For over a decade since a push toward "Toxicity Testing in the 21st Century," the field has focused on massive data generation efforts to inform computational approaches for preliminary hazard identification, adverse outcome pathways that link molecular initiating events and key events to apical outcomes, and high-throughput approaches to risk-based ratios of bioactivity and exposure to inform relative priority and safety assessment. Projects like the interagency Tox21 program and the US EPA ToxCast program have generated dose-response information on thousands of chemicals, identified and aggregated information from legacy systems, and created tools for access and analysis. The resulting information has been used to develop computational models as viable options for regulatory applications. This progress has introduced challenges in data management that are new, but not unique, to toxicology. Some of the key questions require critical thinking and solutions to promote semantic interoperability, including: (1) identification of bioactivity information from NAMs that might be related to a biological process; (2) identification of legacy hazard information that might be related to a key event or apical outcomes of interest; and, (3) integration of these NAM and traditional data for computational modeling and prediction of complex apical outcomes such as carcinogenesis. This work reviews a number of toxicology-related efforts specifically related to bioactivity and toxicological data interoperability based on the goals established by Findable, Accessible, Interoperable, and Reusable (FAIR) Data Principles. These efforts are essential to enable better integration of NAM and traditional toxicology information to support data-driven toxicology applications.


Subject(s)
Computational Biology/methods, Risk Assessment/methods, Toxicology/methods, Animals, Environmental Exposure/adverse effects, Environmental Pollutants/toxicity, Genetic Predisposition to Disease, Humans, Phenotype
6.
Environ Sci Technol; 53(21): 12793-12802, 2019 Nov 05.
Article in English | MEDLINE | ID: mdl-31560848

ABSTRACT

QSAR modeling can be used to aid testing prioritization of the thousands of chemical substances for which no ecological toxicity data are available. We drew on the U.S. Environmental Protection Agency's ECOTOX database with additional data from ECHA to build a large data set containing in vivo test data on fish for thousands of chemical substances. This was used to create QSAR models to predict two types of end points: acute LC50 (median lethal concentration) and points of departure similar to the NOEC (no observed effect concentration) for any duration (named the "LC50" and "NOEC" models, respectively). These models used study covariates, such as species and exposure route, as features to facilitate the simultaneous use of varied data types. A novel method of substituting taxonomy groups for species dummy variables was introduced to maximize generalizability to different species. A stacked ensemble of three machine learning methods-random forest, gradient boosted trees, and support vector regression-was implemented to best make use of a large data set with many descriptors. The LC50 and NOEC models predicted end points within 1 order of magnitude 81% and 76% of the time, respectively, and had RMSEs of roughly 0.83 and 0.98 log10(mg/L), respectively. Benchmarks against the existing TEST and ECOSAR tools suggest improved prediction accuracy.
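The two evaluation metrics reported above can be reproduced with a small sketch. The values below are hypothetical, not the paper's data: the fraction of predictions within one order of magnitude and the RMSE in log10 units:

```python
def fraction_within_tolerance(pred, obs, log_tol=1.0):
    """Fraction of predictions within +/- log_tol log10 units of observed,
    i.e., within one order of magnitude when log_tol = 1."""
    hits = sum(1 for p, o in zip(pred, obs) if abs(p - o) <= log_tol)
    return hits / len(pred)

def rmse(pred, obs):
    """Root-mean-square error in the same log10 units as the inputs."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred)) ** 0.5

# Hypothetical log10(mg/L) LC50 values: observed vs. model-predicted
obs = [0.5, -1.2, 2.0, 1.1]
pred = [0.9, -0.4, 1.8, 2.4]
frac = fraction_within_tolerance(pred, obs)  # 3 of 4 within 1 log unit
```

The same functions apply unchanged to NOEC-style points of departure, since both end points are modeled on a log10 scale.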


Subject(s)
Fishes, Quantitative Structure-Activity Relationship, Animals, Lethal Dose 50
7.
Regul Toxicol Pharmacol; 109: 104510, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31676319

ABSTRACT

Synthesis of 11 steroid hormones in human adrenocortical carcinoma cells (H295R) was measured in a high-throughput steroidogenesis assay (HT-H295R) for 656 chemicals in concentration-response as part of the US Environmental Protection Agency's ToxCast program. This work extends previous analysis of the HT-H295R dataset and model by examining the utility of a novel prioritization metric based on the Mahalanobis distance that reduced these 11-dimensional data to 1-dimension via calculation of a mean Mahalanobis distance (mMd) at each chemical concentration screened for all hormone measures available. Herein, we evaluated the robustness of mMd values, and demonstrate that covariance and variance of the hormones measured appear independent of the chemicals screened and are inherent to the assay; the Type I error rate of the mMd method is less than 1%; and, absolute fold changes (up or down) of 1.5 to 2-fold have sufficient power for statistical significance. As a case study, we examined hormone responses for aromatase inhibitors in the HT-H295R assay and found high concordance with other ToxCast assays for known aromatase inhibitors. Finally, we used mMd and other ToxCast cytotoxicity data to demonstrate prioritization of the most selective and active chemicals as candidates for further in vitro or in silico screening.
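A two-hormone sketch of the mMd idea follows; the actual analysis uses all 11 hormones and an assay-derived covariance matrix, and the numbers here are hypothetical:

```python
def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance for a 2-hormone response vector, using an
    explicit 2x2 covariance inverse (illustration only; the HT-H295R
    analysis works in 11 dimensions)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    # d^2 = dx^T * inv(cov) * dx
    d2 = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
          + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return d2 ** 0.5

# Hypothetical control mean and covariance of hormone log fold changes
dist = mahalanobis_2d([1.5, -0.8], mean=[0.0, 0.0],
                      cov=[[1.0, 0.3], [0.3, 1.0]])
```

Averaging such distances over the hormone measures at each test concentration yields a single prioritization score per chemical-concentration pair.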


Subject(s)
Aromatase Inhibitors/toxicity, Endocrine Disruptors/toxicity, High-Throughput Screening Assays/methods, Steroids/biosynthesis, Cell Line, Tumor, Data Interpretation, Statistical, Datasets as Topic, False Positive Reactions, High-Throughput Screening Assays/standards, Humans, Reproducibility of Results, Toxicity Tests/methods, Toxicity Tests/standards, United States, United States Environmental Protection Agency/standards
8.
Regul Toxicol Pharmacol; 106: 278-291, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31121201

ABSTRACT

Traditional approaches for chemical risk assessment cannot keep pace with the number of substances requiring assessment. Thus, in a global effort to expedite and modernize chemical risk assessment, New Approach Methodologies (NAMs) are being explored and developed. Included in this effort is the OECD Integrated Approaches for Testing and Assessment (IATA) program, which provides a forum for OECD member countries to develop and present case studies illustrating the application of NAM in various risk assessment contexts. Here, we present an IATA case study for the prediction of estrogenic potential of three target phenols: 4-tert-butylphenol, 2,4-di-tert-butylphenol and octabenzone. Key features of this IATA include the use of two computational approaches for analogue selection for read-across, data collected from traditional and NAM sources, and a workflow to generate predictions regarding the targets' ability to bind the estrogen receptor (ER). Endocrine disruption can occur when a chemical substance mimics the activity of natural estrogen by binding to the ER and, if potency and exposure are sufficient, alters the function of the endocrine system to cause adverse effects. The data indicated that of the three target substances that were considered herein, 4-tert-butylphenol is a potential endocrine disruptor. Further, this IATA illustrates that the NAM approach explored is health protective when compared to in vivo endpoints traditionally used for human health risk assessment.


Subject(s)
Benzophenones/pharmacology, Phenols/pharmacology, Receptors, Estrogen/metabolism, Benzophenones/chemistry, Humans, Molecular Structure, Phenols/chemistry, Risk Assessment
9.
Bioinformatics; 33(4): 618-620, 2017 02 15.
Article in English | MEDLINE | ID: mdl-27797781

ABSTRACT

Motivation: Large high-throughput screening (HTS) efforts are widely used in drug development and chemical toxicity screening. Wide use and integration of these data can benefit from an efficient, transparent and reproducible data pipeline. Summary: The tcpl R package and its associated MySQL database provide a generalized platform for efficiently storing, normalizing and dose-response modeling of large high-throughput and high-content chemical screening data. The novel dose-response modeling algorithm has been tested against millions of diverse dose-response series, and robustly fits data with outliers and cytotoxicity-related signal loss. Availability and Implementation: tcpl is freely available on the Comprehensive R Archive Network under the GPL-2 license. Contact: martin.matt@epa.gov.
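As a rough illustration of the kind of dose-response model such pipelines fit, a Hill function is one common choice; the parameters below are hypothetical and this is not tcpl's exact parameterization:

```python
def hill(conc, top, ac50, gw=1.0):
    """Hill response model of the kind used in HTS dose-response fitting:
    response rises from 0 toward `top`, reaching half-maximum at `ac50`
    (all parameter values here are hypothetical)."""
    return top / (1.0 + (ac50 / conc) ** gw)

# Hypothetical concentration series (uM) and the resulting model responses
concs = [0.1, 1.0, 10.0, 100.0]
resp = [hill(c, top=80.0, ac50=5.0) for c in concs]
```

In practice the fitting code would estimate `top`, `ac50`, and the slope from noisy replicate data and compare candidate models (e.g., constant vs. sigmoidal) before declaring a chemical active.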


Subject(s)
Drug Evaluation, Preclinical/methods, High-Throughput Screening Assays/methods, Models, Biological, Software, Toxicity Tests/methods, Algorithms, Computer Simulation, Dose-Response Relationship, Drug
10.
J Chem Inf Model; 57(1): 36-49, 2017 01 23.
Article in English | MEDLINE | ID: mdl-28006899

ABSTRACT

Little toxicity data are available on the vast majority of chemicals in commerce. High-throughput screening (HTS) studies, such as those being carried out by the U.S. Environmental Protection Agency (EPA) ToxCast program in partnership with the federal Tox21 research program, can generate biological data to inform models for predicting potential toxicity. However, physicochemical properties are also needed to model environmental fate and transport, as well as exposure potential. The purpose of the present study was to generate an open-source quantitative structure-property relationship (QSPR) workflow to predict a variety of physicochemical properties that would have cross-platform compatibility to integrate into existing cheminformatics workflows. In this effort, decades-old experimental property data sets available within the EPA EPI Suite were reanalyzed using modern cheminformatics workflows to develop updated QSPR models capable of supplying computationally efficient, open, and transparent HTS property predictions in support of environmental modeling efforts. Models were built using updated EPI Suite data sets for the prediction of six physicochemical properties: octanol-water partition coefficient (logP), water solubility (logS), boiling point (BP), melting point (MP), vapor pressure (logVP), and bioconcentration factor (logBCF). The coefficient of determination (R2) between the estimated values and experimental data for the six predicted properties ranged from 0.826 (MP) to 0.965 (BP), with model performance for five of the six properties exceeding that of the original EPI Suite models. The newly derived models can be employed for rapid estimation of physicochemical properties within an open-source HTS workflow to inform fate and toxicity prediction models of environmental chemicals.
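The R2 values quoted above compare experimental and predicted property values; a minimal sketch of that computation, with hypothetical logP values:

```python
def r_squared(obs, pred):
    """Coefficient of determination between experimental and
    model-predicted property values."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical logP values: experimental vs. QSPR-predicted
obs = [1.2, 3.4, 0.5, 2.8, 4.1]
pred = [1.0, 3.6, 0.8, 2.5, 4.3]
r2 = r_squared(obs, pred)
```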


Subject(s)
Chemical Phenomena, Computer Simulation, Environmental Pollutants/chemistry, Machine Learning, Environmental Pollutants/toxicity, Informatics, Quantitative Structure-Activity Relationship, Solubility, Transition Temperature, Vapor Pressure, Water/chemistry
11.
Environ Sci Technol; 51(15): 8713-8724, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28671818

ABSTRACT

Current environmental monitoring approaches focus primarily on chemical occurrence. However, based on concentration alone, it can be difficult to identify which compounds may be of toxicological concern and should be prioritized for further monitoring, in-depth testing, or management. This can be problematic because toxicological characterization is lacking for many emerging contaminants. New sources of high-throughput screening (HTS) data, such as the ToxCast database, which contains information for over 9000 compounds screened through up to 1100 bioassays, are now available. Integrated analysis of chemical occurrence data with HTS data offers new opportunities to prioritize chemicals, sites, or biological effects for further investigation based on concentrations detected in the environment linked to relative potencies in pathway-based bioassays. As a case study, chemical occurrence data from a 2012 study in the Great Lakes Basin along with the ToxCast effects database were used to calculate exposure-activity ratios (EARs) as a prioritization tool. Technical considerations of data processing and use of the ToxCast database are presented and discussed. EAR prioritization identified multiple sites, biological pathways, and chemicals that warrant further investigation. Prioritized bioactivities from the EAR analysis were linked to discrete adverse outcome pathways to identify potential adverse outcomes and biomarkers for use in subsequent monitoring efforts.
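The exposure-activity ratio itself is a simple quotient; with hypothetical numbers:

```python
def exposure_activity_ratio(env_conc, ac50):
    """EAR: measured environmental concentration divided by the in vitro
    AC50 (same units); values approaching 1 flag potential concern."""
    return env_conc / ac50

# Hypothetical: a chemical detected at 0.02 uM against an assay AC50 of 0.5 uM
ear = exposure_activity_ratio(0.02, 0.5)

# Site-level screening might take the maximum EAR across chemical-assay
# pairs detected at that site (pairs below are hypothetical)
site_ear = max(exposure_activity_ratio(c, a)
               for c, a in [(0.02, 0.5), (0.001, 10.0), (0.3, 1.5)])
```

Ranking sites or chemicals by such ratios is what lets occurrence data and HTS potencies be prioritized jointly rather than by concentration alone.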


Subject(s)
Biological Assay, Environmental Monitoring, High-Throughput Screening Assays, Toxicity Tests, Biomarkers, Great Lakes Region, Humans, Lakes
12.
Regul Toxicol Pharmacol; 86: 74-92, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28242142

ABSTRACT

Predictive toxicity models rely on large amounts of accurate in vivo data. Here, we analyze the quality of in vivo data from the U.S. EPA Toxicity Reference Database (ToxRefDB), using chemical-induced anemia as an example. Considerations include variation in experimental conditions, changes in terminology over time, distinguishing negative from missing results, observer and diagnostic bias, and data transcription errors. Within ToxRefDB, we use hematological data on 658 chemicals tested in one or more of 1738 studies (subchronic rat or chronic rat, mouse, or dog). Anemia was reported most frequently in the rat subchronic studies, followed by chronic studies in dog, rat, and then mouse. Concordance between studies for a positive finding of anemia (same chemical, different laboratories) ranged from 90% (rat subchronic predicting rat chronic) to 40% (mouse chronic predicting rat chronic). Concordance increased with manual curation by 20% on average. We identified 49 chemicals that showed an anemia phenotype in at least two species. These included 14 aniline moiety-containing compounds that were further analyzed for their potential to be metabolically transformed into substituted anilines, which are known anemia-causing chemicals. This analysis should help inform future use of in vivo databases for model development.
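Concordance for a positive anemia finding can be sketched as the fraction of positives in one study type that recur in another. The chemical names below are hypothetical placeholders, and this is a simplification of the paper's analysis:

```python
def concordance(findings_a, findings_b):
    """Fraction of chemicals called positive in study type A that are
    also positive in study type B (positive-call sets are hypothetical)."""
    shared = findings_a & findings_b
    return len(shared) / len(findings_a)

rat_subchronic = {"aniline", "dapsone", "chem3", "chem4", "chem5"}
rat_chronic = {"aniline", "dapsone", "chem3", "chem4", "chemX"}
c = concordance(rat_subchronic, rat_chronic)  # 4 of 5 positives recur
```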


Subject(s)
Anemia/chemically induced, Data Mining, Databases, Factual, Toxicity Tests, Chronic/statistics & numerical data, Toxicity Tests, Subchronic/statistics & numerical data, Animals, Dogs, Mice, Rats, Reference Values, Retrospective Studies, United States, United States Environmental Protection Agency
13.
Regul Toxicol Pharmacol; 91: 39-49, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28993267

ABSTRACT

The US EPA is charged with screening chemicals for their ability to be endocrine disruptors through interaction with the estrogen, androgen and thyroid axes. The agency is exploring the use of high-throughput in vitro assays in the Endocrine Disruptor Screening Program (EDSP), potentially as replacements for lower-throughput in vitro and in vivo tests. The first replacement is an integrated computational and experimental model for estrogen receptor (ER) activity, to be used as an alternative to the EDSP Tier 1 in vitro ER binding and transactivation assays and the in vivo uterotrophic bioassay. The ER agonist model uses a set of 16 in vitro assays that incorporate multiple technologies and cell lines and probe multiple points in the ER pathway. Here, we demonstrate that subsets of as few as 4 assays can predict the activity of all 1811 chemicals tested with accuracy equivalent to that of the full 16-assay model. The prediction accuracy against reference chemicals is higher than that against the full chemical set, partly because the larger set contains many chemicals that can cause a variety of types of assay interference. There are multiple accurate assay subsets, allowing flexibility in the construction of a multiplexed assay battery. We also discuss the issue of challenging chemicals, i.e., those that can give false positive results in certain assays and could hence be more problematic when only a few assays are used.


Subject(s)
Endocrine Disruptors/chemistry, Endocrine Disruptors/pharmacology, Estrogens/agonists, Androgens/metabolism, Biological Assay/methods, Cell Line, Tumor, High-Throughput Screening Assays/methods, Humans, Receptors, Estrogen/metabolism, United States, United States Environmental Protection Agency
14.
Chem Res Toxicol; 29(8): 1225-51, 2016 08 15.
Article in English | MEDLINE | ID: mdl-27367298

ABSTRACT

The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as those used in EPA's Endocrine Disruptor Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA.
Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.


Subject(s)
Toxicology
15.
Regul Toxicol Pharmacol; 79: 12-24, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27174420

ABSTRACT

Read-across is a popular data gap filling technique within category and analogue approaches for regulatory purposes. Acceptance of read-across remains an ongoing challenge with several efforts underway for identifying and addressing uncertainties. Here we demonstrate an algorithmic, automated approach to evaluate the utility of using in vitro bioactivity data ("bioactivity descriptors", from EPA's ToxCast program) in conjunction with chemical descriptor information to derive local validity domains (specific sets of nearest neighbors) to facilitate read-across for up to ten in vivo repeated dose toxicity study types. Over 3239 different chemical structure descriptors were generated for a set of 1778 chemicals and supplemented with the outcomes from 821 in vitro assays. The read-across prediction of toxicity for 600 chemicals with in vivo data was based on the similarity weighted endpoint outcomes of its nearest neighbors. The approach enabled a performance baseline for read-across predictions of specific study outcomes to be established. Bioactivity descriptors were often found to be more predictive of in vivo toxicity outcomes than chemical descriptors or a combination of both. This generalized read-across (GenRA) forms a first step in systemizing read-across predictions and serves as a useful component of a screening level hazard assessment for new untested chemicals.
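The similarity-weighted neighbor prediction at the heart of read-across can be sketched as follows. The similarities and outcomes are hypothetical, and this is a sketch of the idea, not EPA's GenRA implementation:

```python
def read_across_prediction(neighbors):
    """Similarity-weighted read-across: predict a binary toxicity outcome
    for a target chemical from its nearest neighbors' outcomes, weighted
    by similarity (neighbor values are hypothetical)."""
    num = sum(sim * outcome for sim, outcome in neighbors)
    den = sum(sim for sim, _ in neighbors)
    return num / den

# Hypothetical neighbors: (similarity to target, in vivo outcome; 1 = positive)
score = read_across_prediction([(0.9, 1), (0.7, 0), (0.6, 1)])
```

Because the neighbors can be defined by chemical descriptors, bioactivity descriptors, or both, the same formula supports the comparison of local validity domains described above.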


Subject(s)
Algorithms, Biological Assay, Data Mining/methods, Environmental Pollutants/chemistry, Environmental Pollutants/toxicity, Pattern Recognition, Automated, Pharmaceutical Preparations/chemistry, Toxicity Tests/methods, Animal Testing Alternatives, Animals, Cluster Analysis, Databases, Factual, Humans, Molecular Structure, Quantitative Structure-Activity Relationship, Reproducibility of Results, Risk Assessment
16.
Chem Res Toxicol; 28(4): 738-51, 2015 Apr 20.
Article in English | MEDLINE | ID: mdl-25697799

ABSTRACT

The U.S. Tox21 and EPA ToxCast programs screen thousands of environmental chemicals for bioactivity using hundreds of high-throughput in vitro assays to build predictive models of toxicity. We represented chemicals based on bioactivity and chemical structure descriptors, then used supervised machine learning to predict in vivo hepatotoxic effects. A set of 677 chemicals was represented by 711 in vitro bioactivity descriptors (from ToxCast assays), 4,376 chemical structure descriptors (from QikProp, OpenBabel, PaDEL, and PubChem), and three hepatotoxicity categories (from animal studies). Hepatotoxicants were defined by rat liver histopathology observed after chronic chemical testing and grouped into hypertrophy (161), injury (101) and proliferative lesions (99). Classifiers were built using six machine learning algorithms: linear discriminant analysis (LDA), Naïve Bayes (NB), support vector machines (SVM), classification and regression trees (CART), k-nearest neighbors (KNN), and an ensemble of these classifiers (ENSMB). Classifiers of hepatotoxicity were built using chemical structure descriptors, ToxCast bioactivity descriptors, and hybrid descriptors. Predictive performance was evaluated using 10-fold cross-validation testing and in-loop, filter-based, feature subset selection. Hybrid classifiers had the best balanced accuracy for predicting hypertrophy (0.84 ± 0.08), injury (0.80 ± 0.09), and proliferative lesions (0.80 ± 0.10). Though chemical and bioactivity classifiers had a similar balanced accuracy, the former were more sensitive, and the latter were more specific. CART, ENSMB, and SVM classifiers performed the best, and nuclear receptor activation and mitochondrial functions were frequently found in highly predictive classifiers of hepatotoxicity. ToxCast and ToxRefDB provide the largest and richest publicly available data sets for mining linkages between the in vitro bioactivity of environmental chemicals and their adverse histopathological outcomes.
Our findings demonstrate the utility of high-throughput assays for characterizing rodent hepatotoxicants, the benefit of using hybrid representations that integrate bioactivity and chemical structure, and the need for objective evaluation of classification performance.
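
The evaluation protocol described above (10-fold cross-validation scored by balanced accuracy, with in-loop, filter-based feature selection) can be sketched in Python with scikit-learn. This is a minimal illustration, not the study's actual pipeline: the data below are random placeholders standing in for the ToxCast/ToxRefDB descriptors, and only one of the six algorithms (a linear SVM) is shown.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
# Placeholder data: 677 chemicals x 711 bioactivity descriptors,
# with a binary label (1 = hypertrophy observed in rat histopathology).
X = rng.normal(size=(677, 711))
y = rng.integers(0, 2, size=677)

# Putting the filter-based feature selector inside the CV pipeline means
# features are chosen on training folds only, avoiding selection bias.
clf = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("svm", SVC(kernel="linear")),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

With random labels the balanced accuracy hovers near 0.5; the pipeline structure, not the score, is the point of the sketch.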


Subject(s)
Liver/drug effects, Toxicity Tests, Animals, In Vitro Techniques, Molecular Structure, Rats
17.
Environ Sci Technol ; 49(14): 8804-14, 2015 Jul 21.
Article in English | MEDLINE | ID: mdl-26066997

ABSTRACT

The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.
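
The reference-chemical evaluation amounts to thresholding the 0-to-1 model score and comparing the call against known ER activity. A minimal sketch: the chemical names reflect commonly used ER reference compounds, but the scores and the 0.10 cutoff are illustrative assumptions, not values from the study.

```python
# Illustrative table: chemical -> (ToxCast ER model score, known ER-active?).
# Scores are made-up placeholders; activity labels follow common ER reference sets.
reference = {
    "17beta-estradiol": (1.00, True),
    "bisphenol A":      (0.32, True),
    "atrazine":         (0.01, False),
    "corticosterone":   (0.00, False),
}
threshold = 0.10  # assumed activity cutoff; the published cutoff may differ

# A chemical is called active when its score meets the cutoff; accuracy is
# the fraction of reference chemicals whose call matches the known label.
correct = sum((score >= threshold) == active
              for score, active in reference.values())
accuracy = correct / len(reference)
print(f"accuracy on reference set: {accuracy:.0%}")
```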


Subject(s)
Computer Simulation, Endocrine Disruptors/analysis, High-Throughput Screening Assays/methods, Receptors, Estrogen/metabolism, Animals, Benzhydryl Compounds/analysis, Endocrine Disruptors/toxicity, Phenols/analysis, Rats, Reproducibility of Results, Toxicity Tests
18.
Environ Sci Technol ; 48(15): 8706-16, 2014.
Article in English | MEDLINE | ID: mdl-24960280

ABSTRACT

Thousands of environmental chemicals are subject to regulatory review for their potential to be endocrine disruptors (ED). In vitro high-throughput screening (HTS) assays have emerged as a potential tool for prioritizing chemicals for ED-related whole-animal tests. In this study, 1814 chemicals including pesticide active and inert ingredients, industrial chemicals, food additives, and pharmaceuticals were evaluated in a panel of 13 in vitro HTS assays. The panel of in vitro assays interrogated multiple end points related to estrogen receptor (ER) signaling, namely binding, agonist, antagonist, and cell growth responses. The results from the in vitro assays were used to create an ER Interaction Score. For 36 reference chemicals, an ER Interaction Score >0 showed 100% sensitivity and 87.5% specificity for classifying potential ER activity. The magnitude of the ER Interaction Score was significantly related to the potency classification of the reference chemicals (p < 0.0001). ERα/ERβ selectivity was also evaluated, but relatively few chemicals showed significant selectivity for a specific isoform. When applied to a broader set of chemicals with in vivo uterotrophic data, the ER Interaction Scores showed 91% sensitivity and 65% specificity. Overall, this study provides a novel method for combining in vitro concentration response data from multiple assays and, when applied to a large set of ER data, accurately predicted estrogenic responses and demonstrated its utility for chemical prioritization.
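
The sensitivity and specificity figures quoted above follow directly from their standard definitions: sensitivity = TP/(TP+FN) against known actives, specificity = TN/(TN+FP) against known inactives, with a chemical called active when its ER Interaction Score exceeds 0. A sketch under those assumptions, with toy scores and labels rather than the study's data:

```python
def sensitivity_specificity(scores, labels, cutoff=0.0):
    """Call a chemical ER-active when its ER Interaction Score exceeds
    `cutoff`, then compare the calls against reference activity labels."""
    calls = [s > cutoff for s in scores]
    tp = sum(c and l for c, l in zip(calls, labels))          # true positives
    fn = sum((not c) and l for c, l in zip(calls, labels))    # missed actives
    tn = sum((not c) and (not l) for c, l in zip(calls, labels))
    fp = sum(c and (not l) for c, l in zip(calls, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: two known actives, two known inactives (made-up scores).
sens, spec = sensitivity_specificity(
    [0.8, 0.0, 0.3, 0.0], [True, True, False, False])
```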


Subject(s)
Endocrine Disruptors/analysis, Estrogen Receptor alpha/agonists, Estrogen Receptor beta/agonists, High-Throughput Screening Assays, Models, Chemical, Algorithms, Animals, Bioassay, Estrogen Antagonists/analysis, Estrogen Receptor alpha/antagonists & inhibitors, Estrogen Receptor beta/antagonists & inhibitors, Estrogens/analysis, Humans, MCF-7 Cells, Pesticides, Signal Transduction
19.
Toxicology ; 501: 153694, 2024 01.
Article in English | MEDLINE | ID: mdl-38043774

ABSTRACT

Multiple new approach methods (NAMs) are being developed to rapidly screen large numbers of chemicals to aid in hazard evaluation and risk assessments. High-throughput transcriptomics (HTTr) in human cell lines has been proposed as a first-tier screening approach for determining the types of bioactivity a chemical can cause (activation of specific targets vs. generalized cell stress) and for calculating transcriptional points of departure (tPODs) based on changes in gene expression. In the present study, we examine a range of computational methods to calculate tPODs from HTTr data, using six data sets in which MCF7 cells cultured in two different media formulations were treated with a panel of 44 chemicals for 3 different exposure durations (6, 12, 24 hr). The tPOD calculation methods use data at the level of individual genes and gene set signatures, and compare data processed using the ToxCast Pipeline 2 (tcplfit2), BMDExpress and PLIER (Pathway Level Information ExtractoR). Methods were evaluated by comparing to in vitro PODs from a validated set of high-throughput screening (HTS) assays for a set of estrogenic compounds. Key findings include: (1) for a given chemical and set of experimental conditions, tPODs calculated by different methods can vary by several orders of magnitude; (2) tPODs are at least as sensitive to computational methods as to experimental conditions; (3) in comparison to an external reference set of PODs, some methods give generally higher values, principally PLIER and BMDExpress; and (4) the tPODs from HTTr in this one cell type are mostly higher than the overall PODs from a broad battery of targeted in vitro ToxCast assays, reflecting the need to test chemicals in multiple cell types and readout technologies for in vitro hazard screening.
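
Finding (1) above, that tPODs from different computational methods can differ by orders of magnitude, is easy to illustrate. One generic way to summarize gene-level potency estimates into a single tPOD is to take a low percentile of their distribution; the sketch below uses that convention with placeholder values, and does not reproduce the specific tcplfit2, BMDExpress, or PLIER workflows.

```python
import numpy as np

# Hypothetical gene-level PODs (µM) for one chemical from two analysis
# pipelines; placeholder values, not output of any of the named tools.
pods_method_a = np.array([0.5, 1.2, 3.0, 8.5, 20.0])
pods_method_b = np.array([4.0, 9.0, 25.0, 60.0, 150.0])

# One common tPOD summary: a low percentile of the gene-level distribution.
tpod_a = float(np.percentile(pods_method_a, 5))
tpod_b = float(np.percentile(pods_method_b, 5))

# Method-to-method disagreement expressed in orders of magnitude.
spread_log10 = abs(np.log10(tpod_b) - np.log10(tpod_a))
print(f"tPODs differ by {spread_log10:.1f} orders of magnitude")
```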


Subject(s)
Gene Expression Profiling, Transcriptome, Humans, High-Throughput Screening Assays/methods, Estrogens, Cell Line, Risk Assessment/methods
20.
Toxics ; 12(4)2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38668494

ABSTRACT

Per- and polyfluoroalkyl substances (PFAS) are widely used, and their fluorinated state contributes to unique uses and stability but also long half-lives in the environment and humans. PFAS have been shown to be toxic, leading to immunosuppression, cancer, and other adverse health outcomes. Only a small fraction of the PFAS in commerce have been evaluated for toxicity using in vivo tests, which leads to a need to prioritize which compounds to examine further. Here, we demonstrate a prioritization approach that combines human biomonitoring data (blood concentrations) with bioactivity data (concentrations at which bioactivity is observed in vitro) for 31 PFAS. The in vitro data are taken from a battery of cell-based assays, mostly run on human cells. The result is a Bioactive Concentration to Blood Concentration Ratio (BCBCR), similar to a margin of exposure (MoE). Chemicals with low BCBCR values could then be prioritized for further risk assessment. Using this method, two of the PFAS, PFOA (Perfluorooctanoic Acid) and PFOS (Perfluorooctane Sulfonic Acid), have BCBCR values < 1 for some populations. An additional 9 PFAS have BCBCR values < 100 for some populations. This study shows a promising approach to screening level risk assessments of compounds such as PFAS that are long-lived in humans and other species.
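
As the abstract describes, the BCBCR is simply the in vitro bioactive concentration divided by the measured blood concentration, with small ratios flagging chemicals for priority review. A minimal sketch using that definition; the chemical names and concentrations below are invented for illustration, not the study's PFAS measurements.

```python
def bcbcr(bioactive_conc, blood_conc):
    """Bioactive Concentration to Blood Concentration Ratio (an MoE-style
    metric): lower values mean bioactivity occurs nearer measured exposure
    levels, hence higher priority for further risk assessment."""
    return bioactive_conc / blood_conc

# Illustrative concentrations in consistent units (e.g. µM); not real data.
ratios = {
    "PFAS-A": bcbcr(0.5, 1.0),    # ratio < 1: highest-priority tier
    "PFAS-B": bcbcr(50.0, 0.8),   # ratio < 100: secondary tier
    "PFAS-C": bcbcr(200.0, 0.1),  # ratio >= 100: lower priority
}
priority = sorted(name for name, r in ratios.items() if r < 100)
print(priority)  # → ['PFAS-A', 'PFAS-B']
```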
