Results 1 - 20 of 32
1.
Chem Res Toxicol ; 36(3): 465-478, 2023 03 20.
Article in English | MEDLINE | ID: mdl-36877669

ABSTRACT

The need for careful assembly, training, and validation of quantitative structure-activity/property relationship (QSAR/QSPR) models is more significant than ever as data sets become larger and sophisticated machine learning tools become increasingly ubiquitous and accessible to the scientific community. Regulatory agencies such as the United States Environmental Protection Agency must carefully scrutinize each aspect of a resulting QSAR/QSPR model to determine its potential use in environmental exposure and hazard assessment. Herein, we revisit the goals of the Organisation for Economic Co-operation and Development (OECD) in our application and discuss the validation principles for structure-activity models. We apply these principles to a model for predicting the water solubility of organic compounds derived using random forest regression, a common machine learning approach in the QSA/PR literature. Using public sources, we carefully assembled and curated a data set consisting of 10,200 unique chemical structures with associated water solubility measurements. This data set was then used as a focal narrative to methodically consider the OECD's QSA/PR principles and how they can be applied to random forests. Despite some expert, mechanistically informed supervision of descriptor selection to enhance model interpretability, we achieved a model of water solubility with performance comparable to previously published models (5-fold cross-validated performance: R2 = 0.81, RMSE = 0.98). We hope this work will catalyze a necessary conversation around the importance of cautiously modernizing and explicitly leveraging OECD principles while pursuing state-of-the-art machine learning approaches to derive QSA/PR models suitable for regulatory consideration.
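The 5-fold cross-validated R2 and RMSE quoted above can be computed as follows. This is a hypothetical pure-Python sketch of the validation arithmetic only; in practice one would use a library such as scikit-learn with a random forest regressor, and the `fit`/`predict` callables here are illustrative assumptions, not the authors' code.

```python
import math
import random

def five_fold_cv(X, y, fit, predict, k=5, seed=0):
    """k-fold cross-validation; returns pooled R^2 and RMSE over all folds."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint test folds
    y_true, y_pred = [], []
    for i in range(k):
        held_out = set(folds[i])
        train = [j for j in idx if j not in held_out]
        model = fit([X[j] for j in train], [y[j] for j in train])
        y_true += [y[j] for j in folds[i]]
        y_pred += [predict(model, X[j]) for j in folds[i]]
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / len(y_true))
```

Any regressor can be dropped in through the two callables; swapping in a real random forest changes nothing about the validation loop itself.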


Subjects
Organisation for Economic Co-operation and Development, Quantitative Structure-Activity Relationship, Solubility, Algorithms, Water/chemistry
2.
Kidney Blood Press Res ; 41(3): 278-87, 2016.
Article in English | MEDLINE | ID: mdl-27160585

ABSTRACT

BACKGROUND/AIMS: The clinical benefits of percutaneous treatment of renal artery stenosis (RAS) remain controversial. The aim of this study was to evaluate the effects of renal artery stenting on kidney function and blood pressure (BP) control at long-term follow-up. Additionally, angiographic follow-up was performed in a selected subgroup of patients. METHODS: The study was designed as an international registry of 265 consecutive patients with RAS treated with renal artery stenting. The primary end-point of the study was the change in renal function and blood pressure at long-term follow-up as compared with baseline values. Evaluation of renal function was based on the estimated glomerular filtration rate (eGFR) calculated with the Modification of Diet in Renal Disease (MDRD) formula. RESULTS: All patients had clinical follow-up at a median of 23.8 (interquartile range: 3-90) months during ambulatory visits. At follow-up, eGFR improved in 53.9% of patients. These patients had lower pre-procedural systolic BP, a more severe lesion type at baseline, and lower diameter stenosis on control angiography. At follow-up visits, SBP improvement was observed in 77.4% of patients. The average number of anti-hypertensive medications before the procedure and at follow-up did not change significantly (2.70±1.0 vs 2.49±0.9, p=0.1). The restenosis rate, based on control angiography performed at a median of 15 months, was 12%. CONCLUSION: The results of the study suggest that interventional treatment of RAS may preserve renal function and improve blood pressure control at long-term follow-up.


Subjects
Blood Pressure, Kidney/physiology, Renal Artery Obstruction/therapy, Stents, Follow-Up Studies, Humans, Renal Artery/surgery
3.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and associated experimental data. This is especially true when those data are collected from multiple sources as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read, and then the resulting in-memory representation is cross-referenced with any existing identifiers for consistency. Finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and then removal of duplicates. This workflow was initially developed to support collaborative modeling QSAR projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. 
Both QSAR and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes molecular descriptors' accuracy and reliability. The freely available resources in KNIME, GitHub, and docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
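Two of the standardization steps named above, desalting and duplicate removal, can be illustrated in miniature, assuming SMILES input. This is a crude stand-alone sketch: the actual workflow runs in KNIME with full canonicalization, tautomer and nitro-group normalization, valence correction, and neutralization, none of which is reproduced here, and the letter-counting desalting heuristic below is a loose stand-in for a real largest-fragment rule.

```python
def desalt(smiles):
    """Keep the fragment with the most atom letters -- a crude stand-in
    for a proper largest-organic-fragment desalting rule."""
    return max(smiles.split("."), key=lambda f: sum(c.isalpha() for c in f))

def standardize(records):
    """records: (identifier, SMILES) pairs. Desalt, crudely strip
    stereochemistry markers, and drop duplicate structures."""
    seen = {}
    for ident, smi in records:
        std = desalt(smi.strip())
        # crude stereochemistry stripping; a real workflow re-canonicalizes
        std = std.replace("@", "").replace("/", "").replace("\\", "")
        seen.setdefault(std, ident)        # first identifier wins on duplicates
    return [(ident, smi) for smi, ident in seen.items()]
```

Without canonical SMILES generation, two different encodings of the same molecule will not collapse; that is exactly the gap the published "QSAR-ready" workflow closes.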

4.
Article in English | MEDLINE | ID: mdl-23534394

ABSTRACT

Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) Predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.


Subjects
Computer Simulation, Hazardous Substances/toxicity, Models, Chemical, Mutagens/chemistry, Mutagens/toxicity, Mutagenesis, Mutagenicity Tests, Predictive Value of Tests, Quantitative Structure-Activity Relationship, Software
5.
J Chem Inf Model ; 53(9): 2229-39, 2013 Sep 23.
Article in English | MEDLINE | ID: mdl-23962299

ABSTRACT

The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected because they have been shown to correlate diverse data sets and to indicate the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high-confidence assignments, international consensus classifications, ASTER (ASsessment Tools for the Evaluation of Risk) predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and produced overall prediction accuracies ranging from 84.5 to 87.7% for the validation set. These results demonstrate that computational chemistry approaches can be used to determine acute toxicity MOAs across a large range of structures and mechanisms.
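The 75/25 train/validation protocol described above can be sketched as follows, with a nearest-centroid classifier standing in for the LDA and RF models actually used; the MOA labels and descriptor values below are invented for illustration, not taken from the paper's data set.

```python
import random
from collections import defaultdict

def split_75_25(data, seed=1):
    """Random 75% train / 25% validation split of (features, label) pairs."""
    data = data[:]
    random.Random(seed).shuffle(data)
    cut = int(0.75 * len(data))
    return data[:cut], data[cut:]

def fit_centroids(train):
    """Nearest-centroid classifier: a tiny stand-in for LDA / random forest."""
    sums, counts = {}, defaultdict(int)
    for x, label in train:
        sums.setdefault(label, [0.0] * len(x))
        sums[label] = [s + v for s, v in zip(sums[label], x)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in vec] for lab, vec in sums.items()}

def predict_moa(centroids, x):
    """Assign the MOA whose class centroid is closest in descriptor space."""
    dist2 = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))
```

Validation-set accuracy is then just the fraction of held-out chemicals whose predicted MOA matches the curated assignment.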


Subjects
Aquatic Organisms/drug effects, Computational Biology/methods, Toxicity Tests, Discriminant Analysis, Quantitative Structure-Activity Relationship, Reproducibility of Results
6.
ACS Sustain Chem Eng ; 11: 7986-7996, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37476647

ABSTRACT

One type of firefighting foam, referred to as aqueous film-forming foam (AFFF), is known to contain per- and polyfluoroalkyl substances (PFAS). The concerns raised by PFAS and their potential environmental and health impacts have led to a surge in research on fluorine-free alternatives both in the United States and globally. Notably, in January 2023, a new military specification (MIL-PRF-32725) for fluorine-free foam was released in accordance with Congressional requirements for the U.S. Department of Defense. This paper provides a critical analysis of the present state of the various fluorine-free options developed to date. A nuanced perspective on the challenges and opportunities of more sustainable replacements is explored by examining the performance, cost, and regulatory considerations associated with these fluorine-free alternatives. Ultimately, this evaluation shows that the transition to fluorine-free replacements is likely to be complex and multifaceted, requiring careful consideration of the trade-offs involved. Yet, the ongoing work will provide valuable insights for future research on alternatives to AFFF and on enhancing the safety and sustainability of fire suppression systems.

7.
J Chem Inf Model ; 52(10): 2570-8, 2012 Oct 22.
Article in English | MEDLINE | ID: mdl-23030316

ABSTRACT

Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
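The Kennard-Stone algorithm named above selects training compounds greedily so that they span descriptor space: seed with the two most distant points, then repeatedly add the point whose minimum distance to the already-selected set is largest. A hypothetical minimal implementation (O(n^2) with Euclidean distance; real toolkits optimize this):

```python
def kennard_stone(X, n_train):
    """Rational train-set selection: return (train_indices, test_indices)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    n = len(X)
    # seed with the two most distant points
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: dist(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    remaining = [i for i in range(n) if i not in selected]
    while len(selected) < n_train:
        # farthest-point criterion: maximize min distance to selected set
        r = max(remaining, key=lambda q: min(dist(X[q], X[s]) for s in selected))
        selected.append(r)
        remaining.remove(r)
    return selected, remaining
```

The leftover points become the test set, which by construction sits inside the region covered by the training compounds.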


Subjects
Algorithms, Biological Products/chemistry, Quantitative Structure-Activity Relationship, Animals, Biological Products/pharmacology, Cyprinidae/growth & development, Databases, Factual, Drug Discovery, Inhibitory Concentration 50, Lethal Dose 50, Models, Molecular, Rats, Reproducibility of Results, Tetrahymena pyriformis/drug effects, Tetrahymena pyriformis/growth & development, Validation Studies as Topic
8.
J Solution Chem ; 51: 838-849, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35967985

ABSTRACT

A solvent has many different types of impact on the environment. This article describes a method that combines several different types of impact into one environmental index so that similar solvents may be compared by their cumulative impact on the environment. The software tool PARIS III (Program for Assisting the Replacement of Industrial Solvents III) first finds thousands of solvent mixtures with behaviors as close as possible to those of the original solvent entered. The overall environmental impacts of these solvent mixtures are estimated and assigned to environmental indexes. Users of the software tool can then choose replacements for the original solvent with similar activities but with significantly smaller environmental indexes. These solvent mixtures may act as practical substitutes for the industrial solvents while substantially reducing the overall environmental impact of the original harmful solvents. Potential replacements of this kind are found for three of the U.S. Environmental Protection Agency's Toxics Release Inventory solvents: carbon tetrachloride, toluene, and N-methylpyrrolidone.

9.
Chemosphere ; 287(Pt 1): 131845, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34523441

ABSTRACT

"Green" pyrotechnics seek to remove known environmental pollutants and health hazards from their formulations. This chemical engineering approach often focuses on maintaining performance effects upon replacement of objectionable ingredients, yet neglects the chemical products formed by the exothermic reaction. In this work, milligram quantities of a lab-scale pyrotechnic red smoke composition were functioned within a thermal probe for product identification by pyrolysis-gas chromatography-mass spectrometry. Thermally decomposed ingredients and new side product derivatives were identified at lower relative abundances to the intact organic dye (as the engineered sublimation product). Side products included chlorination of the organic dye donated by the chlorate oxidizer. Machine learning quantitative structure-activity relationship models computed impacts to health and environmental hazards. High to very high toxicities were predicted for inhalation, mutagenicity, developmental, and endocrine disruption for common military pyrotechnic dyes and their analogous chlorinated side products. These results underscore the need to revise objectives of "green" pyrotechnic engineering.


Subjects
Coloring Agents, Smoke, Anthraquinones/toxicity, Coloring Agents/toxicity, Mutagens, Nicotiana
10.
J Air Waste Manag Assoc ; 72(6): 540-555, 2022 06.
Article in English | MEDLINE | ID: mdl-34905459

ABSTRACT

The release of persistent per- and polyfluoroalkyl substances (PFAS) into the environment is a major concern for the United States Environmental Protection Agency (U.S. EPA). To complement its ongoing research efforts addressing PFAS contamination, the U.S. EPA's Office of Research and Development (ORD) commissioned the PFAS Innovative Treatment Team (PITT) to provide new perspectives on the treatment and disposal of high-priority PFAS-containing wastes. During its six-month tenure, the team was charged with identifying and developing promising solutions to destroy PFAS. The PITT examined emerging technologies for PFAS waste treatment and selected four for further investigation: mechanochemical treatment, electrochemical oxidation, gasification and pyrolysis, and supercritical water oxidation. This paper highlights these four technologies and discusses their prospects and the development needed before they can become available solutions for PFAS-contaminated waste. Implications: This paper examines four novel, non-combustion technologies or applications for the treatment of persistent per- and polyfluoroalkyl substances (PFAS) wastes. These technologies are introduced to the reader along with their current state of development and areas for further development. This information will be useful for developers, policy makers, and facility managers who face increasing issues with the disposal of PFAS wastes.


Subjects
Fluorocarbons, Water Pollutants, Chemical, Fluorocarbons/analysis, United States, United States Environmental Protection Agency, Water Pollutants, Chemical/analysis
11.
Nature ; 433(7023): E6-7; discussion E7-8, 2005 Jan 20.
Article in English | MEDLINE | ID: mdl-15662371

ABSTRACT

Plotkin et al. introduce a method to detect selection that is based on an index called codon volatility and that uses only the sequence of a single genome, claiming that this method is applicable to a large range of sequenced organisms. Volatility for a given codon is the ratio of non-synonymous codons to all sense codons accessible by one point mutation. The significance of each gene's volatility is assessed by comparison with a simulated distribution of 10^6 synonymous versions of each gene, with synonymous codons drawn randomly from average genome frequencies. Here we re-examine their method and data and find that codon volatility does not detect selection, and that, even if it did, the genomes of Mycobacterium tuberculosis and Plasmodium falciparum, as well as those of most sequenced organisms, do not meet the assumptions necessary for application of their method.
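The volatility index defined above can be computed directly from the standard genetic code (NCBI transl_table 1). A sketch, treating stop codons as non-sense and therefore excluded from both numerator and denominator:

```python
BASES = "TCAG"
# amino acids for the 64 codons in TCAG order (standard code, * = stop)
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: AAS[16 * i + 4 * j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

def volatility(codon):
    """Fraction of one-point-mutation neighbours that are sense codons
    encoding a different amino acid (non-synonymous / all sense)."""
    aa = CODE[codon]
    if aa == "*":
        raise ValueError("volatility is undefined for stop codons")
    neighbours = [codon[:p] + b + codon[p + 1:]
                  for p in range(3) for b in BASES if b != codon[p]]
    sense = [n for n in neighbours if CODE[n] != "*"]
    return sum(CODE[n] != aa for n in sense) / len(sense)
```

For example, TGG (the lone tryptophan codon) has volatility 1.0: every sense neighbour changes the amino acid, whereas a codon in a four-fold degenerate family such as GGG scores lower.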


Subjects
Biological Evolution, Codon/genetics, Genomics/methods, Selection, Genetic, Animals, Genes/genetics, Genome, Models, Genetic, Mutation, Missense/genetics, Mycobacterium tuberculosis/genetics, Plasmodium falciparum/genetics, Reproducibility of Results, Serine/genetics
12.
Comput Toxicol ; 18, 2021 May 01.
Article in English | MEDLINE | ID: mdl-34504984

ABSTRACT

Regulatory agencies world-wide face the challenge of performing risk-based prioritization of thousands of substances in commerce. In this study, a major effort was undertaken to compile a large genotoxicity dataset (54,805 records for 9299 substances) from several public sources (e.g., TOXNET, COSMOS, eChemPortal). The names and outcomes of the different assays were harmonized, and assays were annotated by type: gene mutation in Salmonella bacteria (Ames assay) and chromosome mutation (clastogenicity) in vitro or in vivo (chromosome aberration, micronucleus, and mouse lymphoma Tk +/- assays). This dataset was then evaluated to assess genotoxic potential using a categorization scheme, whereby a substance was considered genotoxic if it was positive in at least one Ames or clastogen study. The categorization dataset comprised 8442 chemicals, of which 2728 chemicals were genotoxic, 5585 were not and 129 were inconclusive. QSAR models (TEST and VEGA) and the OECD Toolbox structural alerts/profilers (e.g., OASIS DNA alerts for Ames and chromosomal aberrations) were used to make in silico predictions of genotoxicity potential. The performance of the individual QSAR tools and structural alerts resulted in balanced accuracies of 57-73%. A Naïve Bayes consensus model was developed using combinations of QSAR models and structural alert predictions. The 'best' consensus model selected had a balanced accuracy of 81.2%, a sensitivity of 87.24% and a specificity of 75.20%. This in silico scheme offers promise as a first step in ranking thousands of substances as part of a prioritization approach for genotoxicity.
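Per substance, the categorization scheme described above reduces to a simple precedence rule over harmonized study outcomes. A sketch, under the assumption that a negative call outweighs inconclusive ones when no positive study exists (the paper's exact tie-breaking may differ):

```python
def categorize(outcomes):
    """outcomes: harmonized study calls for one substance, pooled over Ames
    and clastogenicity assays, each in {'positive','negative','inconclusive'}.
    One positive study is enough to call the substance genotoxic."""
    if "positive" in outcomes:
        return "genotoxic"
    if "negative" in outcomes:
        return "non-genotoxic"
    return "inconclusive"
```

Applied over the curated records, a rule of this shape yields the 2728 / 5585 / 129 split reported above.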

13.
Comput Toxicol ; 20: 100185, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-35128218

ABSTRACT

The Toxic Substances Control Act (TSCA) became law in the U.S. in 1976 and was amended in 2016. The amended law requires the U.S. EPA to perform risk-based evaluations of existing chemicals. Here, we developed a tiered approach to screen potential candidates based on their genotoxicity and carcinogenicity information to inform the selection of candidate chemicals for prioritization under TSCA. The approach was underpinned by a large database of carcinogenicity and genotoxicity information that had been compiled from various public sources. Carcinogenicity data included weight-of-evidence human carcinogenicity evaluations and animal cancer data. Genotoxicity data included bacterial gene mutation data from the Salmonella (Ames) and Escherichia coli WP2 assays and chromosomal mutation (clastogenicity) data. Additionally, Ames and clastogenicity outcomes were predicted using the alert schemes within the OECD QSAR Toolbox and the Toxicity Estimation Software Tool (TEST). The evaluation workflows for carcinogenicity and genotoxicity were developed along with associated scoring schemes to make an overall outcome determination. For this case study, two sets of chemicals, the TSCA Active Inventory non-confidential portion list available on the EPA CompTox Chemicals Dashboard (33,364 chemicals, 'TSCA Active List') and a representative proof-of-concept (POC) set of 238 chemicals, were profiled through the two workflows to make determinations of carcinogenicity and genotoxicity potential. Of the 33,364 substances on the 'TSCA Active List', overall calls could be made for 20,371 substances. Of these, 46.67% (9507) were non-genotoxic, 0.5% (103) were scored as inconclusive, 43.93% (8949) were predicted genotoxic, and 8.9% (1812) were genotoxic. Overall calls for genotoxicity could be made for 225 of the 238 POC chemicals. Of these, 40.44% (91) were non-genotoxic, 2.67% (6) were inconclusive, 6.22% (14) were predicted genotoxic, and 50.67% (114) were genotoxic.
The approach shows promise as a means to identify potential candidates for prioritization from a genotoxicity and carcinogenicity perspective.

14.
Environ Health Perspect ; 129(4): 47013, 2021 04.
Article in English | MEDLINE | ID: mdl-33929906

ABSTRACT

BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: median lethal dose (LD50) value, U.S. Environmental Protection Agency hazard (four) categories, Globally Harmonized System of Classification and Labelling hazard (five) categories, very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies.
CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.


Subjects
Government Agencies, Animals, Computer Simulation, Rats, Toxicity Tests, Acute, United States, United States Environmental Protection Agency
15.
J Chem Inf Model ; 50(12): 2094-111, 2010 Dec 27.
Article in English | MEDLINE | ID: mdl-21033656

ABSTRACT

The estimation of the accuracy and applicability of QSAR and QSPR models for biological and physicochemical properties represents a critical problem. The developed parameter of "distance to model" (DM) is defined as a metric of similarity between the training and test set compounds that have been subjected to QSAR/QSPR modeling. In our previous work, we demonstrated the utility and optimal performance of DM metrics that have been based on the standard deviation within an ensemble of QSAR models. The current study applies such analysis to 30 QSAR models for the Ames mutagenicity data set that were previously reported within the 2009 QSAR challenge. We demonstrate that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs. The presented approach identifies 30-60% of compounds having an accuracy of prediction similar to the interlaboratory accuracy of the Ames test, which is estimated to be 90%. Thus, the in silico predictions can be used to halve the cost of experimental measurements by providing a similar prediction accuracy. The developed model has been made publicly available at http://ochem.eu/models/1.
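A distance-to-model based on the standard deviation within a model ensemble, as described above, can be sketched as follows; the function names and the reliability-flagging helper are illustrative, and the published implementation on OCHEM is considerably more elaborate.

```python
import math

def distance_to_model(ensemble_preds):
    """DM for one compound: the standard deviation of the predictions made
    by the ensemble members. Large disagreement = large DM = less reliable."""
    n = len(ensemble_preds)
    mean = sum(ensemble_preds) / n
    return math.sqrt(sum((p - mean) ** 2 for p in ensemble_preds) / n)

def flag_reliable(pred_matrix, threshold):
    """pred_matrix[i] = ensemble predictions for compound i. Return the
    indices of compounds whose DM falls below the chosen threshold."""
    return [i for i, preds in enumerate(pred_matrix)
            if distance_to_model(preds) < threshold]
```

Calibrating the threshold against held-out data is what lets one claim, as above, that a chosen fraction of compounds is predicted with near-interlaboratory accuracy.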


Subjects
Benchmarking/methods, Classification/methods, Mutagenicity Tests/methods, Quantitative Structure-Activity Relationship, Mutagenicity Tests/standards, Principal Component Analysis
16.
Clean Technol Environ Policy ; 22(2): 441-458, 2020 Mar 01.
Article in English | MEDLINE | ID: mdl-33867908

ABSTRACT

Comparative chemical hazard assessment, which compares hazards for several endpoints across several chemicals, can be used for a variety of purposes including alternatives assessment and the prioritization of chemicals for further assessment. A new framework was developed to compile and integrate chemical hazard data for several human health and ecotoxicity endpoints from public online sources including hazardous chemical lists, Globally Harmonized System hazard codes (H-codes) or hazard categories from government health agencies, experimental quantitative toxicity values, and predicted values using Quantitative Structure-Activity Relationship (QSAR) models. QSAR model predictions were obtained using EPA's Toxicity Estimation Software Tool. Java programming was used to download hazard data, convert data from each source into a consistent score record format, and store the data in a database. Scoring criteria based on the EPA's Design for the Environment Program Alternatives Assessment Criteria for Hazard Evaluation were used to determine ordinal hazard scores (i.e., low, medium, high, or very high) for each score record. Different methodologies were assessed for integrating data from multiple sources into one score for each hazard endpoint for each chemical. The chemical hazard assessment (CHA) Database developed in this study currently contains more than 990,000 score records for more than 85,000 chemicals. The CHA Database and the methods used in its development may contribute to several cheminformatics, public health, and environmental activities.
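Integrating several source-level ordinal scores into one endpoint score, as discussed above, might look like the following sketch. The "most conservative" and "median" rules are two plausible candidate methodologies (the study compared several; these specific two are assumptions), and the four levels follow the scale quoted in the abstract.

```python
LEVELS = ["low", "medium", "high", "very high"]

def integrate_scores(scores, method="max"):
    """Combine ordinal hazard scores from multiple sources for one endpoint.
    'max' keeps the most conservative (most severe) call; 'median' keeps
    the middle-ranked call."""
    ranks = sorted(LEVELS.index(s) for s in scores)
    if method == "max":
        return LEVELS[ranks[-1]]
    if method == "median":
        return LEVELS[ranks[len(ranks) // 2]]
    raise ValueError(f"unknown integration method: {method}")
```

The choice between rules matters: a single "very high" record from one list dominates under 'max' but can be outvoted under 'median'.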

17.
Environ Prog Sustain Energy ; 39(1): e13331, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-32832013

ABSTRACT

PARIS III (Program for Assisting the Replacement of Industrial Solvents III, Version 1.4.0) is a pollution prevention solvent substitution software tool used to find mixtures of solvents that are less harmful to the environment than the industrial solvents they replace. By searching extensively through hundreds of millions of possible solvent combinations, mixtures that perform the same as the original solvents may be found. Greener solvent substitutes may then be chosen from those mixtures that behave similarly but have less environmental impact. These extensive searches may be enhanced by fine-tuning impact weighting factors to better reflect regional environmental concerns and by adjusting how close the properties of the replacement must be to those of the original solvent. Optimal replacements can then be compared and selected for comparable performance but lower environmental impact. This method can be a very effective way of finding greener replacements for harmful solvents used by industry.
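The regional fine-tuning of impact weighting factors mentioned above amounts to a weighted combination of per-category impacts. A hypothetical sketch: the category names and the simple weighted-mean form are assumptions for illustration, not PARIS III's actual index formula.

```python
def environmental_index(impacts, weights):
    """Combine per-category impact scores into a single environmental index
    as a weighted mean; raising a weight emphasizes that regional concern."""
    if set(impacts) != set(weights):
        raise ValueError("impacts and weights must cover the same categories")
    total = sum(weights.values())
    return sum(impacts[k] * weights[k] for k in impacts) / total
```

With equal weights the index is a plain average; up-weighting, say, a smog-formation category shifts the ranking of candidate replacement mixtures toward that concern.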

18.
Environ Health Perspect ; 128(2): 27002, 2020 02.
Article in English | MEDLINE | ID: mdl-32074470

ABSTRACT

BACKGROUND: Endocrine disrupting chemicals (EDCs) are xenobiotics that mimic the interaction of natural hormones and alter synthesis, transport, or metabolic pathways. The prospect of EDCs causing adverse health effects in humans and wildlife has led to the development of scientific and regulatory approaches for evaluating bioactivity. This need is being addressed using high-throughput screening (HTS) in vitro approaches and computational modeling. OBJECTIVES: In support of the Endocrine Disruptor Screening Program, the U.S. Environmental Protection Agency (EPA) led two worldwide consortiums to virtually screen chemicals for their potential estrogenic and androgenic activities. Here, we describe the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA) efforts, which follows the steps of the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP). METHODS: The CoMPARA list of screened chemicals built on CERAPP's list of 32,464 chemicals to include additional chemicals of interest, as well as simulated ToxCast™ metabolites, totaling 55,450 chemical structures. Computational toxicology scientists from 25 international groups contributed 91 predictive models for binding, agonist, and antagonist activity predictions. Models were underpinned by a common training set of 1,746 chemicals compiled from a combined data set of 11 ToxCast™/Tox21 HTS in vitro assays. RESULTS: The resulting models were evaluated using curated literature data extracted from different sources. To overcome the limitations of single-model approaches, CoMPARA predictions were combined into consensus models that provided averaged predictive accuracy of approximately 80% for the evaluation set. 
DISCUSSION: The strengths and limitations of the consensus predictions were discussed with example chemicals; then, the models were implemented into the free and open-source OPERA application to enable screening of new chemicals with a defined applicability domain and accuracy assessment. This implementation was used to screen the entire EPA DSSTox database of ∼875,000 chemicals, and their predicted AR activities have been made available on the EPA CompTox Chemicals dashboard and National Toxicology Program's Integrated Chemical Environment. https://doi.org/10.1289/EHP5580.


Subjects
Computer Simulation , Endocrine Disruptors , Androgens , Databases, Factual , High-Throughput Screening Assays , Humans , Receptors, Androgen , United States , United States Environmental Protection Agency
19.
Chem Res Toxicol ; 22(12): 1913-21, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19845371

ABSTRACT

Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by the coefficient of determination (R2) of the linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but, as expected, led to a decrease in chemical-space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all five models. The consensus models afforded higher prediction accuracy and higher coverage for the external validation data set than the individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
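The consensus procedure described above, averaging predicted LD50 values across five models while honoring each model's applicability domain, and scoring accuracy by the coefficient of determination, can be sketched as follows (an illustration under assumed function names, not the published code):

```python
# Consensus LD50 prediction: average the outputs of the individual QSAR
# models whose applicability domain covers the compound.
def consensus_ld50(predictions, in_domain):
    """predictions: per-model predicted LD50 values (e.g. -log mol/kg);
    in_domain: matching applicability-domain flags per model."""
    usable = [p for p, ok in zip(predictions, in_domain) if ok]
    return sum(usable) / len(usable) if usable else None

# Coefficient of determination (R2) between actual and predicted values,
# used here as the external-validation accuracy metric.
def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

Compounds rejected by every model's applicability domain simply receive no prediction, which is the coverage/accuracy trade-off the abstract reports (R2 of 0.24 to 0.70 depending on the threshold).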


Subjects
Quantitative Structure-Activity Relationship , Toxicity Tests, Acute , Administration, Oral , Animals , Lethal Dose 50 , Models, Theoretical , Organic Chemicals/chemistry , Organic Chemicals/toxicity , Rats
20.
J Thromb Thrombolysis ; 28(2): 224-8, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19291367

ABSTRACT

OBJECTIVE: We evaluated the early pharmacodynamic profile of the combined 30 mg intravenous and 1 mg/kg subcutaneous enoxaparin loading regimen used in the TIMI 11B and ExTRACT TIMI 25 trials. BACKGROUND: It has not been reported whether anti-Xa levels appropriate for percutaneous coronary intervention (PCI) can be reliably achieved within 2 h using this regimen. METHODS: Twenty-six patients with acute coronary syndrome (ACS) treated with this regimen had anti-Xa levels measured at 5 min, 2, 4, 6 and 8 h. RESULTS: Seventy-six percent of patients had anti-Xa levels above 0.5 IU/ml at 5 min. Dose-response curves showed all patients to have anti-Xa levels above 0.5 IU/ml within 1 h. Anti-Xa remained in the targeted range for PCI (0.5 to 1.8 IU/ml) at 2, 4, 6 and 8 h in all patients. CONCLUSION: This regimen is well suited for ACS treatment with an invasive strategy, including the rapid transition to early and rescue PCI.
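The regimen and target range in the abstract reduce to simple arithmetic: a fixed 30 mg IV bolus, a weight-based 1 mg/kg subcutaneous dose, and a PCI anti-Xa window of 0.5 to 1.8 IU/ml. A small sketch (helper names are illustrative, not from the study; for illustration only, not clinical dosing guidance):

```python
# Enoxaparin loading per the regimen described above, and a check of a
# measured anti-Xa activity against the PCI target range from the study.
PCI_RANGE_IU_ML = (0.5, 1.8)  # targeted anti-Xa range for PCI

def loading_doses(weight_kg):
    """Fixed 30 mg IV bolus plus weight-based 1 mg/kg subcutaneous dose."""
    return {"iv_mg": 30, "sc_mg": round(1.0 * weight_kg)}

def in_pci_range(anti_xa_iu_ml):
    """True if a measured anti-Xa level falls within the PCI target range."""
    lo, hi = PCI_RANGE_IU_ML
    return lo <= anti_xa_iu_ml <= hi

doses = loading_doses(80)       # 30 mg IV plus 80 mg SC for an 80-kg patient
ok = in_pci_range(0.45)         # below the 0.5 IU/ml lower bound
```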


Subjects
Angioplasty, Balloon, Coronary , Anticoagulants/pharmacokinetics , Enoxaparin/pharmacokinetics , Factor Xa Inhibitors , Acute Coronary Syndrome/therapy , Aged , Anticoagulants/administration & dosage , Enoxaparin/administration & dosage , Female , Humans , Injections, Intravenous , Injections, Subcutaneous , Male , Middle Aged , Prospective Studies