1.
Chem Res Toxicol ; 34(2): 189-216, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33140634

ABSTRACT

Since 2009, the Tox21 project has screened ∼8500 chemicals in more than 70 high-throughput assays, generating upward of 100 million data points, with all data publicly available through partner websites at the United States Environmental Protection Agency (EPA), National Center for Advancing Translational Sciences (NCATS), and National Toxicology Program (NTP). Underpinning this public effort is the largest compound library ever constructed specifically for improving understanding of the chemical basis of toxicity across research and regulatory domains. Each Tox21 federal partner brought specialized resources and capabilities to the partnership, including three approximately equal-sized compound libraries. All Tox21 data generated to date have resulted from a confluence of ideas, technologies, and expertise used to design, screen, and analyze the Tox21 10K library. The different programmatic objectives of the partners led to three distinct, overlapping compound libraries that, when combined, not only covered a diversity of chemical structures, use-categories, and properties but also incorporated many types of compound replicates. The history of development of the Tox21 "10K" chemical library and data workflows implemented to ensure quality chemical annotations and allow for various reproducibility assessments are described. Cheminformatics profiling demonstrates how the three partner libraries complement one another to expand the reach of each individual library, as reflected in coverage of regulatory lists, predicted toxicity end points, and physicochemical properties. ToxPrint chemotypes (CTs) and enrichment approaches further demonstrate how the combined partner libraries amplify structure-activity patterns that would otherwise not be detected. 
Finally, CT enrichments are used to probe global patterns of activity in combined ToxCast and Tox21 activity data sets relative to test-set size and chemical versus biological end point diversity, illustrating the power of CT approaches to discern patterns in chemical-activity data sets. These results support a central premise of the Tox21 program: A collaborative merging of programmatically distinct compound libraries would yield greater rewards than could be achieved separately.


Subject(s)
Small Molecule Libraries/toxicity, Toxicity Tests, High-Throughput Screening Assays, Humans, United States, United States Environmental Protection Agency
2.
Anal Bioanal Chem ; 413(30): 7495-7508, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34648052

ABSTRACT

With the increasing availability of high-resolution mass spectrometers, suspect screening and non-targeted analysis are becoming popular compound identification tools for environmental researchers. Samples of interest often contain a large (unknown) number of chemicals spanning the detectable mass range of the instrument. In an effort to separate these chemicals prior to injection into the mass spectrometer, a chromatography method is often utilized. There are numerous types of gas and liquid chromatographs that can be coupled to commercially available mass spectrometers. Depending on the type of instrument used for analysis, the researcher is likely to observe a different subset of compounds based on the amenability of those chemicals to the selected experimental techniques and equipment. It would be advantageous if this subset of chemicals could be predicted prior to conducting the experiment, in order to minimize potential false-positive and false-negative identifications. In this work, we utilize experimental datasets to predict the amenability of chemical compounds to detection with liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS). The assembled dataset totals 5517 unique chemicals either explicitly detected or not detected with LC-ESI-MS. The resulting detected/not-detected matrix has been modeled using specific molecular descriptors to predict which chemicals are amenable to LC-ESI-MS, and to which form(s) of ionization. Random forest models, including a measure of the applicability domain of the model for both positive and negative modes of the electrospray ionization source, were successfully developed. The outcome of this work will help to inform future suspect screening and non-targeted analyses of chemicals by better defining the potential LC-ESI-MS detectable chemical landscape of interest.
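The amenability modeling described above pairs a random forest with an applicability-domain check. A minimal sketch of one common form of domain check, the mean distance to the k nearest training chemicals in descriptor space, is shown below; the descriptor values and chemical names are invented for illustration, not taken from the study.

```python
import math

# Hypothetical 2-D descriptor vectors for training chemicals
# (e.g., scaled logP and molecular weight); values are illustrative.
train = {
    "chem_A": [1.2, 0.3], "chem_B": [0.9, 0.5],
    "chem_C": [1.1, 0.4], "chem_D": [3.0, 2.8],
}

def in_domain(query, training, k=3, threshold=1.0):
    """Flag a query as inside the applicability domain when its mean
    Euclidean distance to the k nearest training chemicals is small."""
    dists = sorted(math.dist(query, vec) for vec in training.values())
    return sum(dists[:k]) / k <= threshold

print(in_domain([1.0, 0.4], train))   # True  (close to chem_A/B/C)
print(in_domain([8.0, 9.0], train))   # False (far from all training data)
```

Predictions for out-of-domain chemicals would then be withheld or flagged rather than reported, which is the usual way such a check limits false positives and false negatives.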

3.
Altern Lab Anim ; 49(5): 197-208, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34836462

ABSTRACT

Across multiple sectors, including the food, cosmetics and pharmaceutical industries, there is a need to predict the potential effects of xenobiotics. These effects are determined by the intrinsic ability of the substance, or its derivatives, to interact with the biological system, and by its concentration-time profile at the target site. Physiologically-based kinetic (PBK) models can predict organ-level concentration-time profiles; however, the models are time- and resource-intensive to generate de novo. Read-across is an approach used to reduce or replace animal testing, wherein information from a data-rich chemical is used to make predictions for a data-poor chemical. The recent increase in published PBK models presents the opportunity to use a read-across approach for PBK modelling, that is, to use PBK model information from one chemical to inform the development or evaluation of a PBK model for a similar chemical. Essential to this process is identifying the chemicals for which a PBK model already exists. Herein, the results of a systematic review of existing PBK models, compliant with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) format, are presented. Model information, including species, sex, life-stage, route of administration, software platform used and the availability of model equations, was captured for 7541 PBK models. Chemical information (identifiers and physico-chemical properties) has also been recorded for 1150 unique chemicals associated with these models. This PBK model data set has been made readily accessible, as a Microsoft Excel® spreadsheet, providing a valuable resource for those developing, using or evaluating PBK models in industry, academia and the regulatory sectors.


Subject(s)
Biological Models, Software, Animals, Kinetics, Risk Assessment
4.
Anal Bioanal Chem ; 411(4): 853-866, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30519961

ABSTRACT

In August 2015, the US Environmental Protection Agency (EPA) convened a workshop entitled "Advancing non-targeted analyses of xenobiotic chemicals in environmental and biological media." The purpose of the workshop was to bring together the foremost experts in non-targeted analysis (NTA) to discuss the state of the science for generating, interpreting, and exchanging NTA measurement data. During the workshop, participants discussed potential designs for a collaborative project that would use EPA resources, including the ToxCast library of chemical substances, the DSSTox database, and the CompTox Chemicals Dashboard, to evaluate cutting-edge NTA methods. That discussion was the genesis of EPA's Non-Targeted Analysis Collaborative Trial (ENTACT). Nearly 30 laboratories have enrolled in ENTACT and used a variety of chromatography, mass spectrometry, and data processing approaches to characterize ten synthetic chemical mixtures, three standardized media extracts (human serum, house dust, and silicone band), and thousands of individual substances. Initial results show that nearly all participants have detected and reported more compounds in the mixtures than were intentionally added, with large inter-lab variability in the number of reported compounds. A comparison of gas and liquid chromatography results shows that a plurality (45.3%) of correctly identified compounds were detected by only one method, and 15.4% of compounds were not identified at all. Finally, a limited set of true positive identifications indicates substantial differences in observable chemical space when employing disparate separation and ionization techniques as part of NTA workflows. This article describes the genesis of ENTACT, all study methods and materials, and an analysis of results submitted to date.


Subject(s)
Cooperative Behavior, Environmental Pollutants/analysis, Research Design, Xenobiotics/analysis, Chromatography/methods, Complex Mixtures, Data Collection, Dust, Education, Environmental Exposure, Environmental Pollutants/standards, Environmental Pollutants/toxicity, Humans, Laboratories/organization & administration, Mass Spectrometry/methods, Quality Control, Reference Standards, Serum, Silicones/chemistry, United States, United States Environmental Protection Agency, Xenobiotics/standards, Xenobiotics/toxicity
5.
Anal Bioanal Chem ; 411(4): 835-851, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30612177

ABSTRACT

Non-targeted analysis (NTA) methods are increasingly used to discover contaminants of emerging concern (CECs), but the extent to which these methods can support exposure and health studies remains to be determined. EPA's Non-Targeted Analysis Collaborative Trial (ENTACT) was launched in 2016 to address this need. As part of ENTACT, 1269 unique substances from EPA's ToxCast library were combined to make ten synthetic mixtures, with each mixture containing between 95 and 365 substances. As a participant in the trial, we first performed blinded NTA on each mixture using liquid chromatography (LC) coupled with high-resolution mass spectrometry (HRMS). We then performed an unblinded evaluation to identify limitations of our NTA method. Overall, at least 60% of spiked substances could be observed using selected methods. Discounting spiked isomers, true positive rates from the blinded and unblinded analyses reached a maximum of 46% and 65%, respectively. An overall reproducibility rate of 75% was observed for substances spiked into more than one mixture and observed at least once. Considerable discordance in substance identification was observed when comparing a subset of our results derived from two separate reversed-phase chromatography methods. We conclude that a single NTA method, even when optimized, can likely characterize only a subset of ToxCast substances (and, by extension, other CECs). Rigorous quality control and self-evaluation practices should be required of labs generating NTA data to support exposure and health studies. Accurate and transparent communication of performance results will best enable meaningful interpretations and defensible use of NTA data.


Subject(s)
Liquid Chromatography/methods, Reverse-Phase Chromatography/methods, Complex Mixtures, Environmental Monitoring/methods, Environmental Pollutants/analysis, Mass Spectrometry/methods, Environmental Pollutants/toxicity, Radioactive Tracers, Reference Standards, Reproducibility of Results
6.
PLoS Comput Biol ; 12(2): e1004495, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26871706

ABSTRACT

Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals.
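The analogue-ranking step can be sketched as follows. The descriptor names and values are hypothetical, and a simple similarity score on min-max-normalized descriptors stands in for the paper's correlation metric over pharmacokinetic-relevant molecular descriptors.

```python
# Hypothetical descriptor table (columns such as logP, molecular weight,
# fraction unbound); names and values are illustrative only.
descriptors = {
    "target":     [3.1, 106.2, 0.12],
    "analogue":   [2.9, 106.0, 0.10],
    "polar_drug": [0.5, 450.0, 0.85],
}

# Min-max normalize each descriptor column so no single unit dominates.
cols = list(zip(*descriptors.values()))
lo = [min(c) for c in cols]
span = [max(c) - min(c) or 1.0 for c in cols]
norm = {name: [(v - l) / s for v, l, s in zip(vec, lo, span)]
        for name, vec in descriptors.items()}

def similarity(a, b):
    """Closeness in normalized descriptor space (1 = identical)."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + d)

# Rank knowledgebase chemicals by similarity to the target chemical.
ranked = sorted((n for n in norm if n != "target"),
                key=lambda n: similarity(norm[n], norm["target"]),
                reverse=True)
print(ranked)  # ['analogue', 'polar_drug']
```

The top-ranked chemicals would then supply the parameters, equations, or data used to seed a model for the target, mirroring the exact-match / close-analogue / non-analogue comparison described above.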


Subject(s)
Biological Models, Pharmaceutical Preparations/metabolism, Pharmacokinetics, Animals, Computational Biology, Humans, Knowledge Bases, Mice, Rats, Swine
7.
Chem Res Toxicol ; 29(8): 1225-51, 2016 08 15.
Article in English | MEDLINE | ID: mdl-27367298

ABSTRACT

The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as those used in EPA's Endocrine Disruptor Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA.
Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.


Subject(s)
Toxicology
8.
Environ Sci Technol ; 48(21): 12750-9, 2014 Nov 04.
Article in English | MEDLINE | ID: mdl-25222184

ABSTRACT

United States Environmental Protection Agency (USEPA) researchers are developing a strategy for high-throughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologically relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. Based on probabilistic methods and algorithms developed for The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals (SHEDS-MM), a new mechanistic modeling approach has been developed to accommodate high-throughput (HT) assessment of exposure potential. In this SHEDS-HT model, the residential and dietary modules of SHEDS-MM have been operationally modified to reduce the user burden, input data demands, and run times of the higher-tier model, while maintaining critical features and inputs that influence exposure. The model has been implemented in R; the modeling framework links chemicals to consumer product categories or food groups (and thus exposure scenarios) to predict HT exposures and intake doses. Initially, SHEDS-HT has been applied to 2507 organic chemicals associated with consumer products and agricultural pesticides. These evaluations employ data from recent USEPA efforts to characterize usage (prevalence, frequency, and magnitude), chemical composition, and exposure scenarios for a wide range of consumer products. In modeling indirect exposures from near-field sources, SHEDS-HT employs a fugacity-based module to estimate concentrations in indoor environmental media. The concentration estimates, along with relevant exposure factors and human activity data, are then used by the model to rapidly generate probabilistic population distributions of near-field indirect exposures via dermal, nondietary ingestion, and inhalation pathways. Pathway-specific estimates of near-field direct exposures from consumer products are also modeled. 
Population dietary exposures for a variety of chemicals found in foods are combined with the corresponding chemical-specific near-field exposure predictions to produce aggregate population exposure estimates. The estimated intake dose rates (mg/kg/day) for the 2507-chemical case study spanned 13 orders of magnitude. SHEDS-HT successfully reproduced the pathway-specific exposure results of the higher-tier SHEDS-MM for a case-study pesticide and produced median intake doses significantly correlated (p < 0.0001, R2 = 0.39) with medians inferred using biomonitoring data for 39 chemicals from the National Health and Nutrition Examination Survey (NHANES). Based on the favorable performance of SHEDS-HT with respect to these initial evaluations, we believe this new tool will be useful for HT prediction of chemical exposure potential.
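The probabilistic exposure idea can be sketched with a Monte Carlo loop over a single dermal pathway. The lognormal parameters, absorption fraction, and body weight below are illustrative placeholders, not SHEDS-HT inputs, which come from product-usage and chemical-composition data.

```python
import random
import statistics

random.seed(0)

# Illustrative lognormal parameters (mu, sigma of the ln-values).
LN_MASS_USED_G = (0.0, 1.0)    # product mass used per day (g)
LN_WEIGHT_FRAC = (-6.0, 0.5)   # chemical weight fraction in the product
DERMAL_ABSORPTION = 0.05       # assumed fraction absorbed through skin
BODY_WEIGHT_KG = 70.0

def simulate_person():
    """One simulated individual's dermal intake dose (mg/kg/day)."""
    mass = random.lognormvariate(*LN_MASS_USED_G)
    frac = random.lognormvariate(*LN_WEIGHT_FRAC)
    return mass * 1000 * frac * DERMAL_ABSORPTION / BODY_WEIGHT_KG  # g -> mg

# Population distribution of intake doses for one chemical/pathway.
doses = [simulate_person() for _ in range(10_000)]
print(f"median dose: {statistics.median(doses):.2e} mg/kg/day")
```

Summing such pathway-specific distributions across dermal, ingestion, inhalation, and dietary routes gives the aggregate population estimates the abstract describes.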


Subject(s)
Computer Simulation, Diet, Environmental Exposure/statistics & numerical data, Environmental Pollutants/analysis, Statistical Models, Multimedia, Biomarkers/analysis, Humans, Nutrition Surveys, Organic Compounds/analysis, Pesticides/analysis, Stochastic Processes
9.
J Chem Inf Model ; 53(9): 2229-39, 2013 Sep 23.
Article in English | MEDLINE | ID: mdl-23962299

ABSTRACT

The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected because they have been shown to correlate diverse data sets and provide an indication of the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high-confidence assignments, international consensus classifications, ASTER (ASsessment Tools for the Evaluation of Risk) predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and produced overall prediction accuracies ranging from 84.5 to 87.7% on the validation set. These results demonstrate that computational chemistry approaches can be used to determine acute toxicity MOAs across a large range of structures and mechanisms.
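The 75%/25% split-and-validate workflow can be sketched as below. A 1-nearest-neighbour classifier on made-up descriptors stands in for the paper's LDA and RF models; the labels and feature values are invented for illustration.

```python
import random

random.seed(42)

# Hypothetical (descriptor_vector, MOA_label) pairs; the real study
# used 924 chemicals with many descriptors.
data = [([0.1 * i, 1.0], "narcosis") for i in range(20)] + \
       [([0.1 * i, 5.0], "AChE inhibition") for i in range(20)]

random.shuffle(data)
cut = int(0.75 * len(data))          # 75% training / 25% validation
train, valid = data[:cut], data[cut:]

def predict(x):
    """1-nearest-neighbour stand-in for the paper's LDA/RF models."""
    nearest = min(train, key=lambda t: sum((a - b) ** 2
                                           for a, b in zip(t[0], x)))
    return nearest[1]

# Overall prediction accuracy on the held-out validation set.
accuracy = sum(predict(x) == y for x, y in valid) / len(valid)
print(f"validation accuracy: {accuracy:.1%}")
```

On this cleanly separable toy data the accuracy is perfect; the study's 84.5-87.7% reflects real overlap between MOA classes in descriptor space.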


Subject(s)
Aquatic Organisms/drug effects, Computational Biology/methods, Toxicity Tests, Discriminant Analysis, Quantitative Structure-Activity Relationship, Reproducibility of Results
10.
J Biomed Biotechnol ; 2012: 308381, 2012.
Article in English | MEDLINE | ID: mdl-22619493

ABSTRACT

Bionanomedicine and environmental research share a need for common terms and ontologies. This study applied knowledge systems, data mining, and bibliometrics to nano-scale ADME research from 1991 to 2011. The prominence of nano-ADME in environmental research began to exceed the publication rate in medical research in 2006, a trend that appears to continue as a result of the growing number of products in commerce using nanotechnology, that is, a 5-fold growth in the number of countries with nanomaterials research centers. Funding for this research was virtually nonexistent prior to 2002, whereas today both medical and environmental research is funded globally. Key nanoparticle research began with pharmacology, therapeutic drug delivery, and contrast agents, but the advances have found utility in the environmental research community. As evidence, research on ultrafine aerosols and aquatic colloids increased 6-fold, indicating a new emphasis on environmental nanotoxicology. User-directed expert elicitation from the engineering and chemical/ADME domains can be combined with appropriate Boolean logic and queries to define the corpus of nanoparticle interest. The study combined pharmacological expertise and informatics to identify the corpus by building logical conclusions and observations. Publication-record informatics can lead to an enhanced understanding of the connectivity between fields, as well as help overcome the differences in ontology between them.


Subject(s)
Factual Databases, Nanostructures/toxicity, Nanostructures/therapeutic use, Terminology as Topic, Toxicity Tests, Abstracting and Indexing, Computational Biology, Nanotechnology, Publications
11.
Front Environ Sci ; 10: 1-13, 2022 Apr 05.
Article in English | MEDLINE | ID: mdl-35936994

ABSTRACT

Per- and polyfluoroalkyl substances (PFAS) are a class of man-made chemicals of global concern for many health and regulatory agencies due to their widespread use and persistence in the environment (in soil, air, and water), bioaccumulation, and toxicity. This concern has catalyzed a need to aggregate data to support research efforts that can, in turn, inform regulatory and statutory actions. An ongoing challenge regarding PFAS has been the shifting definition of what qualifies a substance to be a member of the PFAS class. There is no single definition for a PFAS, but various attempts have been made to utilize substructural definitions that either encompass broad working scopes or satisfy narrower regulatory guidelines. Depending on the size and specificity of PFAS substructural filters applied to the U.S. Environmental Protection Agency (EPA) DSSTox database, currently exceeding 900,000 unique substances, PFAS substructure-defined space can span hundreds to tens of thousands of compounds. This manuscript reports on the curation of PFAS chemicals and assembly of lists that have been made publicly available to the community via the EPA's CompTox Chemicals Dashboard. Creation of these PFAS lists required the harvesting of data from EPA and online databases, peer-reviewed publications, and regulatory documents. These data have been extracted and manually curated, annotated with structures, and made available to the community in the form of lists defined by structure filters, as well as lists comprising non-structurable PFAS, such as polymers and complex mixtures. These lists, along with their associated linkages to predicted and measured data, are fueling PFAS research efforts within the EPA and are serving as a valuable resource to the international scientific community.
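A toy illustration of a substructure-style PFAS filter follows. Real list curation uses proper SMARTS substructure matching on chemical structures; plain string tests on SMILES, as here, are only a crude stand-in, and the pattern below is a simplified CF2/CF3 motif, not any agency's working definition.

```python
# Naive string-based stand-in for a substructure PFAS filter.
CF2_PATTERNS = ("C(F)(F)F", "C(F)(F)")   # perfluorinated-carbon motifs

smiles = {
    "PFOA":    "OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F",
    "ethanol": "CCO",
}

def looks_like_pfas(smi):
    """True if the SMILES string contains a perfluorinated-carbon motif."""
    return any(p in smi for p in CF2_PATTERNS)

hits = [name for name, smi in smiles.items() if looks_like_pfas(smi)]
print(hits)  # ['PFOA']
```

The abstract's point that filter breadth drives list size follows directly: widening or narrowing the pattern set changes how many of the 900,000+ DSSTox substances qualify.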

12.
Bioinformatics ; 26(23): 3000-1, 2010 Dec 01.
Article in English | MEDLINE | ID: mdl-20889496

ABSTRACT

MOTIVATION: Advances in the field of cheminformatics have been hindered by a lack of freely available tools. We have created Chembench, a publicly available cheminformatics portal for analyzing experimental chemical structure-activity data. Chembench provides a broad range of tools for data visualization and embeds a rigorous workflow for creating and validating predictive Quantitative Structure-Activity Relationship models and using them for virtual screening of chemical libraries to prioritize compound selection for drug discovery and/or chemical safety assessment.
AVAILABILITY: Freely accessible at http://chembench.mml.unc.edu
CONTACT: alex_tropsha@unc.edu


Subject(s)
Drug Discovery, Software, Computational Biology, Quantitative Structure-Activity Relationship, Small Molecule Libraries, Structure-Activity Relationship
13.
J Cheminform ; 13(1): 92, 2021 Nov 25.
Article in English | MEDLINE | ID: mdl-34823605

ABSTRACT

A key challenge in the field of Quantitative Structure-Activity Relationships (QSAR) is how to effectively treat experimental error in the training and evaluation of computational models. It is often assumed in the field of QSAR that models cannot produce predictions more accurate than their training data. Additionally, it is implicitly assumed, by necessity, that data points in test or validation sets contain no error and that each data point is a population mean. This work proposes the hypothesis that QSAR models can make predictions more accurate than their training data and that the error-free test set assumption leads to a significant misevaluation of model performance. This work used eight datasets covering six common QSAR endpoints, because different endpoints should carry different amounts of experimental error associated with the varying complexity of the measurements. Up to 15 levels of simulated Gaussian-distributed random error were added to the datasets, and models were built on the error-laden datasets using five different algorithms. The models were trained on the error-laden data, evaluated on error-laden test sets, and evaluated on error-free test sets. The results show that, for each level of added error, the RMSE for evaluation on the error-free test sets was always better. The results support the hypothesis that, at least under the conditions of Gaussian-distributed random error, QSAR models can make predictions more accurate than their training data, and that the evaluation of models on error-laden test and validation sets may give a flawed measure of model performance. These results have implications for how QSAR models are evaluated, especially in disciplines where experimental error is very large, such as computational toxicology.
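The central point, that a model can be more accurate than the data it is scored against, can be reproduced numerically. In the extreme case below, a model that predicts the true values exactly scores an RMSE of zero against an error-free test set but a large RMSE against the same test set after Gaussian measurement noise is added; all values are synthetic.

```python
import math
import random

random.seed(1)

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# True underlying endpoint values (illustrative linear relationship).
true_test = [0.5 * x for x in range(200)]
sigma = 5.0  # standard deviation of the simulated experimental error

# A "perfect" model: its predictions equal the true values.
pred = true_test[:]

# Error-laden test set: true values plus Gaussian measurement noise.
noisy_test = [y + random.gauss(0, sigma) for y in true_test]

print("RMSE vs error-free test set:", rmse(pred, true_test))   # 0.0
print("RMSE vs error-laden test set:", round(rmse(pred, noisy_test), 2))
```

The second RMSE comes out near sigma even though the model is exactly right, which is the misevaluation the paper describes.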

14.
Environ Health Perspect ; 129(4): 47013, 2021 04.
Article in English | MEDLINE | ID: mdl-33929906

ABSTRACT

BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: the Lethal Dose 50 (LD50) value, U.S. Environmental Protection Agency hazard categories (four), Globally Harmonized System for Classification and Labeling hazard categories (five), very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage the strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies.
CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.


Subject(s)
Government Agencies, Animals, Computer Simulation, Rats, Acute Toxicity Tests, United States, United States Environmental Protection Agency
15.
Sci Data ; 7(1): 122, 2020 04 20.
Article in English | MEDLINE | ID: mdl-32313097

ABSTRACT

Time courses of compound concentrations in plasma are used in chemical safety analysis to evaluate the relationship between external administered doses and internal tissue exposures. This type of experimental data is rarely available for the thousands of non-pharmaceutical chemicals to which people may potentially be unknowingly exposed but is necessary to properly assess the risk of such exposures. In vitro assays and in silico models are often used to craft an understanding of a chemical's pharmacokinetics; however, the certainty of the quantitative application of these estimates for chemical safety evaluations cannot be determined without in vivo data for external validation. To address this need, we present a public database of chemical time-series concentration data from 567 studies in humans or test animals for 144 environmentally-relevant chemicals and their metabolites (187 analytes total). All major administration routes are incorporated, with concentrations measured in blood/plasma, tissues, and excreta. We also include calculated pharmacokinetic parameters for some studies, and a bibliography of additional source documents to support future extraction of time-series. In addition to pharmacokinetic model calibration and validation, these data may be used for analyses of differential chemical distribution across chemicals, species, doses, or routes, and for meta-analyses on pharmacokinetic studies.


Subject(s)
Environmental Pollutants/pharmacokinetics, Animals, Humans
16.
Comput Toxicol ; 12, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-33426407

ABSTRACT

The US Environmental Protection Agency's (EPA) Distributed Structure-Searchable Toxicity (DSSTox) database, launched publicly in 2004, currently exceeds 875 K substances spanning hundreds of lists of interest to EPA and environmental researchers. From its inception, DSSTox has focused curation efforts on resolving chemical identifier errors and conflicts in the public domain towards the goal of assigning accurate chemical structures to data and lists of importance to the environmental research and regulatory community. Accurate structure-data associations, in turn, are necessary inputs to structure-based predictive models supporting hazard and risk assessments. In 2014, the legacy, manually curated DSSTox_V1 content was migrated to a MySQL data model, with modern cheminformatics tools supporting both manual and automated curation processes to increase efficiencies. This was followed by sequential auto-loads of filtered portions of three public datasets: EPA's Substance Registry Services (SRS), the National Library of Medicine's ChemID, and PubChem. This process was constrained by a key requirement of uniquely mapped identifiers (i.e., CAS RN, name and structure) for each substance, rejecting content where any two identifiers were conflicted either within or across datasets. This rejected content highlighted the degree of conflicting, inaccurate substance-structure ID mappings in the public domain, ranging from 12% (within EPA SRS) to 49% (across ChemID and PubChem). Substances successfully added to DSSTox from each auto-load were assigned to one of five qc_levels, conveying curator confidence in each dataset. This process enabled a significant expansion of DSSTox content to provide better coverage of the chemical landscape of interest to environmental scientists, while retaining focus on the accuracy of substance-structure-data associations. 
Currently, DSSTox serves as the core foundation of EPA's CompTox Chemicals Dashboard [https://comptox.epa.gov/dashboard], which provides public access to DSSTox content in support of a broad range of modeling and research activities within EPA and, increasingly, across the field of computational toxicology.
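The rejection rule described above — content is dropped when any two identifiers conflict within or across datasets — can be sketched as follows (hypothetical function and record layout for illustration, not the actual DSSTox loader):

```python
from collections import defaultdict

def find_identifier_conflicts(records):
    """Return CAS RNs with conflicted substance-structure mappings.

    Each record is a (cas_rn, preferred_name, structure_key) triple, e.g.
    using an InChIKey as the structure key. A CAS RN linked to more than
    one structure, or a structure linked to more than one CAS RN, marks a
    conflict; under a unique-mapping rule such content is rejected.
    """
    cas_to_structs = defaultdict(set)
    struct_to_cas = defaultdict(set)
    for cas, _name, struct in records:
        cas_to_structs[cas].add(struct)
        struct_to_cas[struct].add(cas)
    conflicted = {cas for cas, structs in cas_to_structs.items()
                  if len(structs) > 1}
    for cas_set in struct_to_cas.values():
        if len(cas_set) > 1:
            conflicted |= cas_set
    return conflicted
```

Running the check over the merged content of several source datasets, rather than each dataset alone, is what surfaces the cross-dataset conflicts the abstract quantifies.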

17.
Environ Health Perspect ; 127(1): 14501, 2019 01.
Article in English | MEDLINE | ID: mdl-30632786

ABSTRACT

Per- and polyfluoroalkyl substances (PFASs) are a group of fluorinated substances of interest to researchers, regulators, and the public due to their widespread presence in the environment. A few PFASs have comparatively extensive amounts of human epidemiological, exposure, and experimental animal toxicity data (e.g., perfluorooctanoic acid), whereas little toxicity and exposure information exists for much of the broader set of PFASs. Given that traditional approaches to generate toxicity information are resource intensive, new approach methods, including in vitro high-throughput toxicity (HTT) testing, are being employed to inform PFAS hazard characterization and further (in vivo) testing. The U.S. Environmental Protection Agency (EPA) and the National Toxicology Program (NTP) are collaborating to develop a risk-based approach for conducting PFAS toxicity testing to facilitate PFAS human health assessments. This article describes the construction of a PFAS screening library and the process by which a targeted subset of 75 PFASs were selected. Multiple factors were considered, including interest to the U.S. EPA, compounds within targeted categories, structural diversity, exposure considerations, procurability and testability, and availability of existing toxicity data. Generating targeted HTT data for PFASs represents a new frontier for informing priority setting. https://doi.org/10.1289/EHP4555.
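The multi-factor selection process can be illustrated as a two-stage filter-and-rank sketch (hypothetical function and criteria names; the actual selection weighed the factors listed above qualitatively, not as a literal score):

```python
def shortlist_candidates(candidates,
                         required=("procurable", "testable"),
                         soft=("epa_interest", "structurally_novel",
                               "has_toxicity_data")):
    """Hypothetical two-stage screen: discard candidates failing any hard
    requirement, then rank the rest by how many soft criteria they meet."""
    eligible = [c for c in candidates if all(c.get(k) for k in required)]
    # Negate the count so higher soft-criteria scores sort first
    return sorted(eligible, key=lambda c: -sum(bool(c.get(k)) for k in soft))
```

A candidate that cannot be procured or tested never reaches the ranking stage, mirroring how procurability and testability acted as gating considerations.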


Subject(s)
Fluorocarbons/chemistry, Fluorocarbons/toxicity, Toxicokinetics, Hazardous Substances/chemistry, Hazardous Substances/toxicity, High-Throughput Screening Assays, Molecular Structure, United States, United States Environmental Protection Agency
18.
Toxicol Sci ; 169(2): 317-332, 2019 06 01.
Article in English | MEDLINE | ID: mdl-30835285

ABSTRACT

The U.S. Environmental Protection Agency (EPA) is faced with the challenge of efficiently and credibly evaluating chemical safety often with limited or no available toxicity data. The expanding number of chemicals found in commerce and the environment, coupled with time and resource requirements for traditional toxicity testing and exposure characterization, continue to underscore the need for new approaches. In 2005, EPA charted a new course to address this challenge by embracing computational toxicology (CompTox) and investing in the technologies and capabilities to push the field forward. The return on this investment has been demonstrated through results and applications across a range of human and environmental health problems, as well as initial application to regulatory decision-making within programs such as the EPA's Endocrine Disruptor Screening Program. The CompTox initiative at EPA is more than a decade old. This manuscript presents a blueprint to guide the strategic and operational direction over the next 5 years. The primary goal is to obtain broader acceptance of the CompTox approaches for application to higher tier regulatory decisions, such as chemical assessments. To achieve this goal, the blueprint expands and refines the use of high-throughput and computational modeling approaches to transform the components in chemical risk assessment, while systematically addressing key challenges that have hindered progress. In addition, the blueprint outlines additional investments in cross-cutting efforts to characterize uncertainty and variability, develop software and information technology tools, provide outreach and training, and establish scientific confidence for application to different public health and environmental regulatory decisions.


Subject(s)
Computational Biology/methods, High-Throughput Screening Assays/methods, Toxicology/methods, Decision Making, Humans, Information Technology, Risk Assessment, Toxicokinetics, United States, United States Environmental Protection Agency
19.
Sci Data ; 5: 180125, 2018 07 10.
Article in English | MEDLINE | ID: mdl-29989593

ABSTRACT

Quantitative data on product chemical composition are a necessary input for characterizing near-field exposure. This data set comprises reported and predicted information on more than 75,000 chemicals and more than 15,000 consumer products; its primary intended use is for exposure, risk, and safety assessments. The data set includes specific products with quantitative or qualitative ingredient information that has been publicly disclosed through material safety data sheets (MSDSs) and ingredient lists. Each product is assigned a single category from a refined and harmonized set of product categories. The data set also contains information on the functional role of chemicals in products, which can inform predictions of the concentrations at which they occur. These data will be useful to exposure and risk assessors evaluating chemical and product safety.
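The record structure the abstract describes — a product with one harmonized category, ingredients with optional quantitative fractions, and a functional role per chemical — can be sketched as follows (hypothetical field names; the actual data set's schema may differ):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Ingredient:
    chemical_name: str
    cas_rn: Optional[str] = None             # may be undisclosed on the MSDS
    weight_fraction: Optional[float] = None  # quantitative, when reported
    functional_role: Optional[str] = None    # e.g. "surfactant", "fragrance"

@dataclass
class Product:
    name: str
    category: str                            # one harmonized product category
    ingredients: List[Ingredient] = field(default_factory=list)

# Illustrative record, not an entry from the actual data set
shampoo = Product(
    name="generic shampoo",
    category="personal care: hair",
    ingredients=[Ingredient("sodium laureth sulfate", cas_rn="9004-82-4",
                            weight_fraction=0.12,
                            functional_role="surfactant")],
)
```

Making the quantitative fields optional reflects the mix of quantitative and qualitative disclosure the abstract notes, while the functional role supports concentration prediction when a fraction is missing.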


Subject(s)
Consumer Product Safety, Databases, Factual, Inorganic Compounds, Organic Compounds, Environmental Exposure, Household Products, Manufactured Materials
20.
Sci Total Environ ; 636: 901-909, 2018 Sep 15.
Article in English | MEDLINE | ID: mdl-29729507

ABSTRACT

The structures and physicochemical properties of chemicals are important determinants of their potential toxicological effects, toxicokinetics, and route(s) of exposure. These data are needed to prioritize thousands of environmental chemicals by risk, but experimental values are often lacking. To help fill data gaps in physicochemical property information efficiently, we generated new data for 200 structurally diverse compounds, rigorously selected from the USEPA ToxCast chemical library, whose structures are available within the Distributed Structure-Searchable Toxicity Database (DSSTox). This pilot study evaluated rapid experimental methods for determining five physicochemical properties: the log of the octanol:water partition coefficient (log(Kow), also known as logP), vapor pressure, water solubility, Henry's law constant, and the acid dissociation constant (pKa). For most compounds, experiments were successful for at least one property; log(Kow) yielded the largest return (176 values). Seventy-seven ToxPrint structural features were enriched in chemicals with at least one measurement failure, indicating which features may have contributed to rapid method failures. To gauge consistency with traditional measurement methods, the new measurements were compared with previous measurements where available. Because quantitative structure-activity/property relationship (QSAR/QSPR) models are used to fill gaps in physicochemical property information, five suites of QSPRs were evaluated for their predictive ability and chemical coverage (applicability domain) against the new experimental measurements.
Accurate measurements of these properties will facilitate better exposure predictions in two ways: 1) direct input of the experimental measurements into exposure models; and 2) construction of QSPRs with a wider applicability domain, whose predicted physicochemical values can parameterize exposure models in the absence of experimental data.
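Two of the measured properties are linked by a common estimation route: for sparingly soluble, non-ionizing compounds, the Henry's law constant can be approximated as vapor pressure divided by aqueous solubility. A minimal sketch (hypothetical function name; unit handling is the caller's responsibility):

```python
def henrys_law_constant(vapor_pressure_atm, water_solubility_mol_m3):
    """Approximate the Henry's law constant (atm·m³/mol) as the ratio of
    pure-compound vapor pressure to aqueous solubility. A reasonable
    estimate only for sparingly soluble, non-ionizing compounds."""
    return vapor_pressure_atm / water_solubility_mol_m3

# Benzene at 25 °C: ~0.125 atm vapor pressure, ~22.8 mol/m³ solubility
h_benzene = henrys_law_constant(0.125, 22.8)  # ~5.5e-3 atm·m³/mol
```

This interdependence is one reason a measurement failure for one property (e.g. solubility) can propagate into gaps for others.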


Subject(s)
Models, Chemical, Pilot Projects, Quantitative Structure-Activity Relationship, Solubility, United States, United States Environmental Protection Agency, Water