Results 1 - 20 of 51
1.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.


Subjects
Computer Simulation , Pesticides , Quantitative Structure-Activity Relationship , Toxicity Tests, Acute , United States Environmental Protection Agency , Animals , Risk Assessment , Pesticides/toxicity , Lethal Dose 50 , Rats , Administration, Oral , Toxicity Tests, Acute/methods , United States , Reproducibility of Results
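The categorical comparison described above can be sketched as a simple binning function. The Category III and IV cutoffs are quoted in the abstract; the Category I and II boundaries (50 and 500 mg/kg) are added here from the standard EPA scheme and should be verified against the label regulations before use.

```python
def epa_acute_oral_category(ld50_mg_per_kg):
    """Map a rat acute oral LD50 (mg/kg) to an EPA acute toxicity category.

    Cutoffs for categories III (>500-5000) and IV (>5000) follow the
    abstract; the I/II boundaries are the standard EPA scheme.
    """
    if ld50_mg_per_kg <= 50:
        return "I"
    if ld50_mg_per_kg <= 500:
        return "II"
    if ld50_mg_per_kg <= 5000:
        return "III"
    return "IV"

def categorical_concordance(predicted_ld50s, observed_ld50s):
    """Fraction of chemicals whose predicted and observed LD50 values
    fall in the same EPA category."""
    pairs = list(zip(predicted_ld50s, observed_ld50s))
    hits = sum(1 for p, o in pairs
               if epa_acute_oral_category(p) == epa_acute_oral_category(o))
    return hits / len(pairs)
```

For example, a CATMoS prediction of 6000 mg/kg and an empirical LD50 of 5500 mg/kg count as concordant (both Category IV).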
2.
Nucleic Acids Res ; 48(W1): W586-W590, 2020 07 02.
Article in English | MEDLINE | ID: mdl-32421835

ABSTRACT

High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, representing more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interferent signals largely explained by chemicals interacting with the light spectrum. This phenomenon results in up to 5-10% false positives, depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interfering chemicals based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure-activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) Database (∼800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.


Subjects
High-Throughput Screening Assays , Software , Artifacts , Fluorescence , Internet , Machine Learning , Pharmaceutical Preparations/chemistry , Quantitative Structure-Activity Relationship , Workflow
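Downstream use of interference predictions like these often reduces to a triage step: chemicals whose predicted interference probability for a given assay technology exceeds a tolerance are flagged for orthogonal confirmation. The threshold and probabilities below are illustrative, not InterPred's actual outputs.

```python
def flag_interference(probabilities, threshold=0.5):
    """Flag chemicals whose predicted probability of assay interference
    (e.g. autofluorescence or luciferase inhibition) exceeds a threshold.

    probabilities: {chemical_id: predicted probability}. Returns a sorted
    list of flagged chemical ids; the 0.5 default is a placeholder.
    """
    return sorted(chem for chem, p in probabilities.items() if p > threshold)
```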
3.
Toxicol Appl Pharmacol ; 387: 114774, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31783037

ABSTRACT

Chemical risk assessment relies on toxicity tests that require significant numbers of animals, time, and cost. For the >30,000 chemicals in commerce, the current scale of animal testing is insufficient to address chemical safety concerns as regulatory and product stewardship considerations evolve to require more comprehensive understanding of potential biological effects, conditions of use, and associated exposures. We demonstrate the use of a multi-level strategy based on new approach methodologies (NAMs) for hazard- and risk-based prioritization to reduce animal testing. A Level 1/2 chemical prioritization based on estrogen receptor (ER) activity and metabolic activation using ToxCast data was used to select 112 chemicals for testing in a Level 3 human uterine cell estrogen response assay (IKA assay). The Level 3 data were coupled with quantitative in vitro to in vivo extrapolation (Q-IVIVE) to support bioactivity determination (as a surrogate for hazard) in a tissue-specific context. Assay AC50s and Q-IVIVE were used to estimate human equivalent doses (HEDs), and HEDs were compared to rodent uterotrophic assay in vivo-derived points of departure (PODs). For substances active both in vitro and in vivo, IKA assay-derived HEDs were lower than or equivalent to in vivo PODs for 19/23 compounds (83%). Activity exposure relationships were calculated, and the IKA assay was as or more protective of human health than the rodent uterotrophic assay for all IKA-positive compounds. This study demonstrates the utility of biologically relevant fit-for-purpose assays and supports the use of a multi-level strategy for chemical risk assessment.


Subjects
Animal Use Alternatives/methods , Endocrine Disruptors/toxicity , High-Throughput Screening Assays/methods , Toxicity Tests/methods , Uterus/drug effects , Animals , Biological Assay/methods , Cell Culture Techniques , Cell Line, Tumor , Cell Proliferation/drug effects , Computer Simulation , Feasibility Studies , Female , Humans , Models, Biological , Rats , Risk Assessment/methods , Uterus/cytology
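The Q-IVIVE step can be sketched as reverse dosimetry: divide an in vitro AC50 by the steady-state plasma concentration (Css) produced by a unit oral dose. The formula, units, and example values below are illustrative simplifications, not the study's actual toxicokinetic model.

```python
def human_equivalent_dose(ac50_uM, css_uM_per_unit_dose):
    """Reverse dosimetry sketch: the oral dose (mg/kg/day) whose
    steady-state plasma concentration equals the in vitro AC50.

    css_uM_per_unit_dose: steady-state concentration (uM) produced by a
    1 mg/kg/day oral dose, e.g. from a one-compartment TK model
    (assumed linear kinetics; a simplification for illustration).
    """
    return ac50_uM / css_uM_per_unit_dose
```

A chemical with an AC50 of 2.5 uM and a Css of 0.5 uM per 1 mg/kg/day would map to an HED of 5 mg/kg/day under these assumptions.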
4.
Regul Toxicol Pharmacol ; 117: 104764, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32798611

ABSTRACT

Screening certain environmental chemicals for their ability to interact with endocrine targets, including the androgen receptor (AR), is an important global concern. We previously developed a model using a battery of eleven in vitro AR assays to predict in vivo AR activity. Here we describe a revised mathematical modeling approach that also incorporates data from newly available assays and demonstrate that subsets of assays can provide close to the same level of predictivity. These subset models are evaluated against the full model using 1820 chemicals, as well as in vitro and in vivo reference chemicals from the literature. Agonist batteries of as few as six assays and antagonist batteries of as few as five assays can yield balanced accuracies of 95% or better relative to the full model. Balanced accuracy for predicting reference chemicals is 100%. An approach is outlined for researchers to develop their own subset batteries to accurately detect AR activity using assays that map to the pathway of key molecular and cellular events involved in chemical-mediated AR activation and transcriptional activity. This work indicates in vitro bioactivity and in silico predictions that map to the AR pathway could be used in an integrated approach to testing and assessment for identifying chemicals that interact directly with the mammalian AR.


Subjects
Androgen Receptor Antagonists/toxicity , Androgens/toxicity , Hazardous Substances/toxicity , Models, Theoretical , Receptors, Androgen , Androgen Receptor Antagonists/metabolism , Androgens/metabolism , Animals , Environmental Exposure/prevention & control , Environmental Exposure/statistics & numerical data , Hazardous Substances/metabolism , High-Throughput Screening Assays/methods , Humans , Receptors, Androgen/metabolism
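Balanced accuracy, the metric used above to compare subset batteries against the full model, is the mean of sensitivity and specificity; a minimal implementation for binary labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on positives) and specificity
    (recall on negatives) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

Unlike plain accuracy, this weights the active and inactive classes equally, which matters when reference chemical sets are imbalanced.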
5.
Anal Bioanal Chem ; 411(4): 853-866, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30519961

ABSTRACT

In August 2015, the US Environmental Protection Agency (EPA) convened a workshop entitled "Advancing non-targeted analyses of xenobiotic chemicals in environmental and biological media." The purpose of the workshop was to bring together the foremost experts in non-targeted analysis (NTA) to discuss the state-of-the-science for generating, interpreting, and exchanging NTA measurement data. During the workshop, participants discussed potential designs for a collaborative project that would use EPA resources, including the ToxCast library of chemical substances, the DSSTox database, and the CompTox Chemicals Dashboard, to evaluate cutting-edge NTA methods. That discussion was the genesis of EPA's Non-Targeted Analysis Collaborative Trial (ENTACT). Nearly 30 laboratories have enrolled in ENTACT and used a variety of chromatography, mass spectrometry, and data processing approaches to characterize ten synthetic chemical mixtures, three standardized media (human serum, house dust, and silicone band) extracts, and thousands of individual substances. Initial results show that nearly all participants have detected and reported more compounds in the mixtures than were intentionally added, with large inter-lab variability in the number of reported compounds. A comparison of gas and liquid chromatography results shows that the largest fraction (45.3%) of correctly identified compounds was detected by only one method and 15.4% of compounds were not identified. Finally, a limited set of true positive identifications indicates substantial differences in observable chemical space when employing disparate separation and ionization techniques as part of NTA workflows. This article describes the genesis of ENTACT, all study methods and materials, and an analysis of results submitted to date.


Subjects
Cooperative Behavior , Environmental Pollutants/analysis , Research Design , Xenobiotics/analysis , Chromatography/methods , Complex Mixtures , Data Collection , Dust , Education , Environmental Exposure , Environmental Pollutants/standards , Environmental Pollutants/toxicity , Humans , Laboratories/organization & administration , Mass Spectrometry/methods , Quality Control , Reference Standards , Serum , Silicones/chemistry , United States , United States Environmental Protection Agency , Xenobiotics/standards , Xenobiotics/toxicity
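The per-method comparison above reduces to set arithmetic over identification lists from each chromatography platform; a sketch with invented compound identifiers:

```python
def method_overlap(gc_ids, lc_ids, spiked_ids):
    """Partition a spiked mixture's compounds by detection method.

    Returns the fractions of spiked compounds correctly identified by
    exactly one method, by both methods, and by neither.
    """
    spiked = set(spiked_ids)
    gc = set(gc_ids) & spiked   # correct GC identifications
    lc = set(lc_ids) & spiked   # correct LC identifications
    one = gc ^ lc               # symmetric difference: one method only
    both = gc & lc
    missed = spiked - (gc | lc)
    n = len(spiked)
    return {"one_method": len(one) / n,
            "both": len(both) / n,
            "missed": len(missed) / n}
```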
6.
Arch Toxicol ; 92(2): 587-600, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29075892

ABSTRACT

To address a major challenge in chemical safety assessment, the need for alternative approaches to characterizing systemic effect levels, a predictive model was developed. Systemic effect levels were curated from ToxRefDB, HESS-DB and COSMOS-DB from numerous study types totaling 4379 in vivo studies for 1247 chemicals. Observed systemic effects in mammalian models are a complex function of chemical dynamics, kinetics, and inter- and intra-individual variability. To address this complex problem, systemic effect levels were modeled at the study-level by leveraging study covariates (e.g., study type, strain, administration route) in addition to multiple descriptor sets, including chemical (ToxPrint, PaDEL, and Physchem), biological (ToxCast), and kinetic descriptors. Using random forest modeling with cross-validation and external validation procedures, study-level covariates alone accounted for approximately 15% of the variance, reducing the root mean squared error (RMSE) from 0.96 log10 to 0.85 log10 mg/kg/day and providing a baseline performance metric (lower expectation of model performance). A consensus model developed using a combination of study-level covariates, chemical, biological, and kinetic descriptors explained a total of 43% of the variance with an RMSE of 0.69 log10 mg/kg/day. A benchmark model (upper expectation of model performance) was also developed with an RMSE of 0.5 log10 mg/kg/day by incorporating study-level covariates and the mean effect level per chemical. To achieve a representative chemical-level prediction, the minimum study-level predicted and observed effect level per chemical were compared, reducing the RMSE from 1.0 to 0.73 log10 mg/kg/day, equivalent to 87% of predictions falling within an order of magnitude of the observed value.
Although biological descriptors did not improve model performance, the final model was enriched for biological descriptors that indicated xenobiotic metabolism gene expression, oxidative stress, and cytotoxicity, demonstrating the importance of accounting for kinetics and non-specific bioactivity in predicting systemic effect levels. Herein, we generated an externally predictive model of systemic effect levels for use as a safety assessment tool and have generated forward predictions for over 30,000 chemicals.


Subjects
Models, Chemical , Toxicity Tests , Animals , Cosmetics/toxicity , Databases, Chemical , Models, Statistical , Toxicokinetics
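The two headline statistics in the abstract, RMSE in log10 mg/kg/day units and the share of predictions falling within an order of magnitude of the observed value, can be computed directly:

```python
import math

def rmse_log10(observed, predicted):
    """Root mean squared error of log10-transformed effect levels
    (mg/kg/day), as reported in the abstract."""
    errs = [(math.log10(o) - math.log10(p)) ** 2
            for o, p in zip(observed, predicted)]
    return math.sqrt(sum(errs) / len(errs))

def within_order_of_magnitude(observed, predicted):
    """Fraction of predictions within 1 log10 unit of the observed value."""
    hits = sum(1 for o, p in zip(observed, predicted)
               if abs(math.log10(o) - math.log10(p)) <= 1.0)
    return hits / len(observed)
```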
7.
J Chem Inf Model ; 57(1): 36-49, 2017 01 23.
Article in English | MEDLINE | ID: mdl-28006899

ABSTRACT

Little toxicity data are available for the vast majority of chemicals in commerce. High-throughput screening (HTS) studies, such as those being carried out by the U.S. Environmental Protection Agency (EPA) ToxCast program in partnership with the federal Tox21 research program, can generate biological data to inform models for predicting potential toxicity. However, physicochemical properties are also needed to model environmental fate and transport, as well as exposure potential. The purpose of the present study was to generate an open-source quantitative structure-property relationship (QSPR) workflow to predict a variety of physicochemical properties that would have cross-platform compatibility to integrate into existing cheminformatics workflows. In this effort, decades-old experimental property data sets available within the EPA EPI Suite were reanalyzed using modern cheminformatics workflows to develop updated QSPR models capable of supplying computationally efficient, open, and transparent HTS property predictions in support of environmental modeling efforts. Models were built using updated EPI Suite data sets for the prediction of six physicochemical properties: octanol-water partition coefficient (logP), water solubility (logS), boiling point (BP), melting point (MP), vapor pressure (logVP), and bioconcentration factor (logBCF). The coefficient of determination (R2) between the estimated values and experimental data for the six predicted properties ranged from 0.826 (MP) to 0.965 (BP), with model performance for five of the six properties exceeding those from the original EPI Suite models. The newly derived models can be employed for rapid estimation of physicochemical properties within an open-source HTS workflow to inform fate and toxicity prediction models of environmental chemicals.


Subjects
Chemical Phenomena , Computer Simulation , Environmental Pollutants/chemistry , Machine Learning , Environmental Pollutants/toxicity , Informatics , Quantitative Structure-Activity Relationship , Solubility , Transition Temperature , Vapor Pressure , Water/chemistry
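The coefficient of determination used to score each property model compares squared residuals against the variance around the mean of the experimental values:

```python
def r_squared(experimental, estimated):
    """Coefficient of determination R^2 between experimental property
    values and model estimates: 1 - SS_res / SS_tot."""
    mean = sum(experimental) / len(experimental)
    ss_tot = sum((y - mean) ** 2 for y in experimental)
    ss_res = sum((y - f) ** 2 for y, f in zip(experimental, estimated))
    return 1.0 - ss_res / ss_tot
```

An R2 of 1.0 means the estimates reproduce the data exactly; predicting the mean for every chemical gives 0.0.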
8.
J Chem Inf Model ; 57(11): 2874-2884, 2017 11 27.
Article in English | MEDLINE | ID: mdl-29022712

ABSTRACT

We present a practical and easy-to-run in silico workflow that exploits a structure-based docking strategy to derive highly predictive classification models of the androgenic potential of chemicals. Models were trained on a high-quality chemical collection comprising 1689 curated compounds made available within the CoMPARA consortium from the US Environmental Protection Agency and were integrated with a two-step applicability domain whose implementation improved both confidence in prediction and model statistics by reducing the number of false negatives. Among the nine androgen receptor X-ray solved structures, the crystal 2PNU (entry code from the Protein Data Bank) was associated with the best performing structure-based classification model. Three validation sets, each comprising 2590 compounds extracted from the DUD-E collection, were used to challenge model performance and the effectiveness of the applicability domain implementation. Next, the 2PNU model was applied to screen and prioritize two collections of chemicals. The first is a small pool of 12 representative androgenic compounds that were accurately classified, with a sound rationale at the molecular level. The second is a large external blind set of 55450 chemicals with potential for human exposure. We show how the use of molecular docking provides highly interpretable models and can represent a real-life option as an alternative nontesting method for predictive toxicology.


Subjects
Androgens/toxicity , Molecular Docking Simulation , Androgens/chemistry , Androgens/metabolism , Computer Simulation , Protein Conformation , Quantitative Structure-Activity Relationship , Receptors, Androgen/chemistry , Receptors, Androgen/metabolism
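The two-step applicability domain acts as a gate in front of the docking-score classifier: out-of-domain chemicals are returned as inconclusive rather than forced into a class, which is how the false-negative reduction is achieved. The score cutoff and domain checks here are placeholders, not the paper's fitted parameters.

```python
def classify_androgenicity(docking_score, in_structural_domain,
                           in_descriptor_domain, score_cutoff=-9.0):
    """Docking-based binary classification with a two-step applicability
    domain (AD). More negative docking scores mean stronger predicted
    binding; chemicals failing either AD check are flagged rather than
    classified. All cutoffs are illustrative placeholders.
    """
    if not (in_structural_domain and in_descriptor_domain):
        return "out_of_domain"
    return "active" if docking_score <= score_cutoff else "inactive"
```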
9.
Chem Res Toxicol ; 29(9): 1410-27, 2016 09 19.
Article in English | MEDLINE | ID: mdl-27509301

ABSTRACT

The US Environmental Protection Agency's (EPA) Endocrine Disruptor Screening Program (EDSP) is using in vitro data generated from ToxCast/Tox21 high-throughput screening assays to assess the endocrine activity of environmental chemicals. Considering that in vitro assays may have limited metabolic capacity, inactive chemicals that are biotransformed into metabolites with endocrine bioactivity may be missed for further screening and testing. Therefore, there is a value in developing novel approaches to account for metabolism and endocrine activity of both parent chemicals and their associated metabolites. We used commercially available software to predict metabolites of 50 parent compounds, out of which 38 chemicals are known to have estrogenic metabolites, and 12 compounds and their metabolites are negative for estrogenic activity. Three ER QSAR models were used to determine potential estrogen bioactivity of the parent compounds and predicted metabolites, the outputs of the models were averaged, and the chemicals were then ranked based on the total estrogenicity of the parent chemical and metabolites. The metabolite prediction software correctly identified known estrogenic metabolites for 26 out of 27 parent chemicals with associated metabolite data, and 39 out of 46 estrogenic metabolites were predicted as potential biotransformation products derived from the parent chemical. The QSAR models estimated stronger estrogenic activity for the majority of the known estrogenic metabolites compared to their parent chemicals. Finally, the three models identified a similar set of parent compounds as top ranked chemicals based on the estrogenicity of putative metabolites. This proposed in silico approach is an inexpensive and rapid strategy for the detection of chemicals with estrogenic metabolites and may reduce potential false negative results from in vitro assays.


Subjects
Computer Simulation , Endocrine Disruptors/toxicity , Environmental Pollutants/toxicity , Estrogens/chemistry , Databases as Topic , Endocrine Disruptors/chemistry , Endocrine Disruptors/metabolism , Environmental Pollutants/metabolism , Forecasting , Humans , Quantitative Structure-Activity Relationship , United States , United States Environmental Protection Agency
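The ranking step, averaging the three QSAR outputs per structure and scoring each parent by the combined estrogenicity of itself and its predicted metabolites, might look like the sketch below (scores and structure names are invented; the actual aggregation rule used in the study may differ).

```python
def rank_parents(model_scores, metabolites):
    """Rank parent chemicals by total predicted estrogenicity.

    model_scores: {structure: [score_model1, score_model2, score_model3]}
    metabolites:  {parent: [predicted metabolite structures]}
    Each structure's score is the mean of the model outputs; a parent's
    total is its own score plus the scores of its metabolites.
    """
    mean = lambda xs: sum(xs) / len(xs)
    totals = {}
    for parent, mets in metabolites.items():
        totals[parent] = mean(model_scores[parent]) + \
            sum(mean(model_scores[m]) for m in mets)
    return sorted(totals, key=totals.get, reverse=True)
```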
10.
Chem Res Toxicol ; 29(8): 1225-51, 2016 08 15.
Article in English | MEDLINE | ID: mdl-27367298

ABSTRACT

The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as those used in EPA's Endocrine Disruption Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA.
Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.


Subjects
Toxicology
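The inventory-coverage survey described above is, at its core, set intersection keyed on CAS Registry Numbers; a sketch with invented identifiers:

```python
def inventory_coverage(library_casrns, inventory_casrns):
    """Fraction of a target inventory covered by a screening library,
    keyed on CAS Registry Numbers (duplicates are collapsed)."""
    covered = set(library_casrns) & set(inventory_casrns)
    return len(covered) / len(set(inventory_casrns))
```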
11.
Chem Res Toxicol ; 28(4): 738-51, 2015 Apr 20.
Article in English | MEDLINE | ID: mdl-25697799

ABSTRACT

The U.S. Tox21 and EPA ToxCast programs screen thousands of environmental chemicals for bioactivity using hundreds of high-throughput in vitro assays to build predictive models of toxicity. We represented chemicals based on bioactivity and chemical structure descriptors, then used supervised machine learning to predict in vivo hepatotoxic effects. A set of 677 chemicals was represented by 711 in vitro bioactivity descriptors (from ToxCast assays), 4,376 chemical structure descriptors (from QikProp, OpenBabel, PaDEL, and PubChem), and three hepatotoxicity categories (from animal studies). Hepatotoxicants were defined by rat liver histopathology observed after chronic chemical testing and grouped into hypertrophy (161), injury (101) and proliferative lesions (99). Classifiers were built using six machine learning algorithms: linear discriminant analysis (LDA), Naïve Bayes (NB), support vector machines (SVM), classification and regression trees (CART), k-nearest neighbors (KNN), and an ensemble of these classifiers (ENSMB). Classifiers of hepatotoxicity were built using chemical structure descriptors, ToxCast bioactivity descriptors, and hybrid descriptors. Predictive performance was evaluated using 10-fold cross-validation testing and in-loop, filter-based, feature subset selection. Hybrid classifiers had the best balanced accuracy for predicting hypertrophy (0.84 ± 0.08), injury (0.80 ± 0.09), and proliferative lesions (0.80 ± 0.10). Though chemical and bioactivity classifiers had a similar balanced accuracy, the former were more sensitive, and the latter were more specific. CART, ENSMB, and SVM classifiers performed the best, and nuclear receptor activation and mitochondrial functions were frequently found in highly predictive classifiers of hepatotoxicity. ToxCast and ToxRefDB provide the largest and richest publicly available data sets for mining linkages between the in vitro bioactivity of environmental chemicals and their adverse histopathological outcomes.
Our findings demonstrate the utility of high-throughput assays for characterizing rodent hepatotoxicants, the benefit of using hybrid representations that integrate bioactivity and chemical structure, and the need for objective evaluation of classification performance.


Subjects
Liver/drug effects , Toxicity Tests , Animals , In Vitro Techniques , Molecular Structure , Rats
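The ensemble classifier (ENSMB) can be read as a vote over the base learners' predictions; a minimal sketch, with tie handling as an explicit assumption:

```python
def ensemble_vote(predictions):
    """Majority vote over per-classifier binary predictions (0/1).

    predictions: list of lists, one inner list per base classifier
    (e.g. LDA, NB, SVM, CART, KNN), aligned by chemical. Ties go to
    the positive class here; the real tie rule is an assumption.
    """
    n_clf = len(predictions)
    voted = []
    for votes in zip(*predictions):  # iterate chemicals
        voted.append(1 if sum(votes) * 2 >= n_clf else 0)
    return voted
```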
12.
J Cheminform ; 16(1): 101, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152469

ABSTRACT

With the increased availability of chemical data in public databases, innovative techniques and algorithms have emerged for the analysis, exploration, visualization, and extraction of information from these data. One such technique is chemical grouping, where chemicals with common characteristics are categorized into distinct groups based on physicochemical properties, use, biological activity, or a combination. However, existing tools for chemical grouping often require specialized programming skills or the use of commercial software packages. To address these challenges, we developed a user-friendly chemical grouping workflow implemented in KNIME, a free, open-source, low/no-code, data analytics platform. The workflow serves as an all-encompassing tool, expertly incorporating a range of processes such as molecular descriptor calculation, feature selection, dimensionality reduction, hyperparameter search, and supervised and unsupervised machine learning methods, enabling effective chemical grouping and visualization of results. Furthermore, we implemented tools for interpretation, identifying key molecular descriptors for the chemical groups, and using natural language summaries to clarify the rationale behind these groupings. The workflow was designed to run seamlessly in both the KNIME local desktop version and KNIME Server WebPortal as a web application. It incorporates interactive interfaces and guides to assist users in a step-by-step manner. We demonstrate the utility of this workflow through a case study using an eye irritation and corrosion dataset.

Scientific contributions: This work presents a novel, comprehensive chemical grouping workflow in KNIME, enhancing accessibility by integrating a user-friendly graphical interface that eliminates the need for extensive programming skills.
This workflow uniquely combines several features such as automated molecular descriptor calculation, feature selection, dimensionality reduction, and machine learning algorithms (both supervised and unsupervised), with hyperparameter optimization to refine chemical grouping accuracy. Moreover, we have introduced an innovative interpretative step and natural language summaries to elucidate the underlying reasons for chemical groupings, significantly advancing the usability of the tool and interpretability of the results.
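A toy version of the grouping step, assigning each chemical's descriptor vector to the nearest group centroid, conveys the core operation without the KNIME tooling (descriptor vectors and centroids below are invented):

```python
import math

def assign_groups(descriptors, centroids):
    """Assign each chemical's descriptor vector to the nearest centroid
    by Euclidean distance. Returns a list of centroid indices, one per
    chemical. A crude stand-in for the full clustering workflow.
    """
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(range(len(centroids)), key=lambda i: dist(vec, centroids[i]))
            for vec in descriptors]
```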

13.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and associated experimental data. This is especially true when those data are collected from multiple sources as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read, and then the resulting in-memory representation is cross-referenced with any existing identifiers for consistency. Finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and then removal of duplicates. This workflow was initially developed to support collaborative modeling QSAR projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. 
Both QSAR and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community.

Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes molecular descriptors' accuracy and reliability. The freely available resources in KNIME, GitHub, and Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
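Two of the standardization operations, desalting and duplicate removal, can be illustrated on SMILES strings directly. Keeping the largest dot-separated fragment is a crude stand-in for a real desalting rule set, and exact string comparison is a stand-in for canonicalization, which the actual workflow performs with a cheminformatics toolkit:

```python
def desalt(smiles):
    """Keep the largest dot-separated fragment of a SMILES string.
    Fragment size is approximated by string length, an illustrative
    simplification of real desalting rules."""
    return max(smiles.split("."), key=len)

def deduplicate(smiles_list):
    """Drop exact duplicates after desalting, preserving first-seen
    order. Real workflows compare canonicalized structures, not raw
    strings."""
    seen, unique = set(), []
    for smi in smiles_list:
        clean = desalt(smi)
        if clean not in seen:
            seen.add(clean)
            unique.append(clean)
    return unique
```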

14.
Vaccine X ; 19: 100503, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38868522

ABSTRACT

Scorpion envenoming (SE) is a public health problem in developing countries. In Algeria, the population exposed to the risk of SE was estimated at 86.45% in 2019. Thus, the development of a vaccine to protect the exposed population against scorpion toxins would be a major advance in the fight against this disease. This work aimed to evaluate the immunoprotective effect of a Multiple Antigenic Peptide against the Aah II toxin of the Androctonus australis hector scorpion, the most dangerous scorpion species in Algeria. The immunogen MAP1Aah2 was designed and tested accordingly. This molecule contains a B epitope, derived from Aah II toxin, linked by a spacer to a universal T epitope, derived from the tetanus toxin. The results showed that MAP1Aah2 was non-toxic even though its sequence was derived from Aah II toxin. The immunoenzymatic assay revealed that the three immunization regimens tested generated specific anti-MAP1Aah2 antibodies that cross-reacted with the toxin. Mice immunized with this immunogen were partially protected against mortality caused by challenge doses of 2 and 3 LD50 of the toxin. The survival rate and the symptoms that developed varied depending on the adjuvant and the challenge dose used. In the in vitro neutralization test, the immune sera of mice that received the immunogen with incomplete Freund's adjuvant neutralized a challenge dose of 2 LD50. Hence, the concept of using peptide dendrimers, based on linear epitopes of scorpion toxins, as immunogens against the parent toxin was established. However, the protective properties of the tested immunogen require further optimization.

15.
Environ Health Perspect ; 132(8): 85002, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39106156

ABSTRACT

BACKGROUND: The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAMs development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, due to their intricacy, deep understanding and careful selection are imperative to align the appropriate methods with their intended applications.
OBJECTIVES: This commentary aims to deepen the understanding of class-based approaches by elucidating the pivotal role of chemical similarity (structural and biological) in clustering and classification approaches (CCAs). It addresses the dichotomy between general end point-agnostic similarity, often entailing unsupervised analysis, and end point-specific similarity necessitating supervised learning. The goal is to highlight the nuances of these approaches, their applications, and common misuses.
DISCUSSION: Understanding similarity is pivotal in toxicological research involving CCAs. The effectiveness of these approaches depends on the right definition and measure of similarity, which varies with the context and objectives of the study. This choice is influenced by how chemical structures are represented and by the labels indicating biological activity, if applicable. The distinction between unsupervised clustering and supervised classification methods is vital, requiring end point-agnostic vs. end point-specific similarity definitions, respectively. Separate use or combination of these methods requires careful consideration to prevent bias and to ensure relevance to the goal of the study. Unsupervised methods use end point-agnostic similarity measures to uncover general structural patterns and relationships, aiding hypothesis generation and facilitating exploration of datasets without the need for predefined labels or explicit guidance. Conversely, supervised techniques demand end point-specific similarity to group chemicals into predefined classes or to train classification models, allowing accurate predictions for new chemicals. Misuse can arise when unsupervised methods are applied to end point-specific contexts, such as analog selection in read-across, leading to erroneous conclusions. This commentary provides insights into the significance of similarity and its role in supervised classification and unsupervised clustering approaches. https://doi.org/10.1289/EHP14001.
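As a purely illustrative sketch of the distinction the commentary draws, the toy Python below applies the same Tanimoto structural similarity in two ways: an end point-agnostic grouping that uses no labels, and an end point-specific nearest-neighbor prediction driven by training labels. The fingerprints, chemical names, labels, and threshold are all hypothetical.

```python
# Toy sketch (hypothetical fingerprints and labels): the same Tanimoto
# structural similarity feeds either an end point-agnostic clustering
# (no labels) or an end point-specific nearest-neighbor prediction.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (lists of 0/1)."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

fps = {  # toy structural fingerprints
    "chem_A": [1, 1, 0, 0, 1, 0],
    "chem_B": [1, 1, 0, 0, 0, 0],
    "chem_C": [0, 0, 1, 1, 0, 1],
}
labels = {"chem_A": "toxic", "chem_C": "nontoxic"}  # training labels only

def cluster(fps, threshold=0.5):
    """Unsupervised: naive single-linkage grouping by structural similarity."""
    groups = []
    for name in fps:
        for group in groups:
            if any(tanimoto(fps[name], fps[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

def predict(query, fps, labels):
    """Supervised: label of the most similar labeled training chemical."""
    nearest = max(labels, key=lambda n: tanimoto(fps[query], fps[n]))
    return labels[nearest]

print(cluster(fps))                    # chem_A and chem_B group together
print(predict("chem_B", fps, labels))  # nearest labeled analog is chem_A
```

Note how the clustering never consults `labels`, while the prediction is meaningless without them; this is the unsupervised/supervised split the commentary warns against conflating.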


Subjects
Machine Learning , Cluster Analysis , Unsupervised Machine Learning , Toxicology/methods , Algorithms
16.
J Chem Inf Model ; 53(4): 867-78, 2013 Apr 22.
Article in English | MEDLINE | ID: mdl-23469921

ABSTRACT

The European REACH regulation requires information on ready biodegradation, a screening test that assesses the biodegradability of chemicals. At the same time, REACH encourages the use of alternatives to animal testing, including predictions from quantitative structure-activity relationship (QSAR) models. The aim of this study was to build QSAR models to predict ready biodegradation of chemicals using different modeling methods and types of molecular descriptors. Particular attention was given to data screening and validation procedures in order to build predictive models. Experimental values for 1055 chemicals were collected from the webpage of the National Institute of Technology and Evaluation of Japan (NITE): 837 and 218 molecules were used for calibration and testing purposes, respectively. In addition, the models were further evaluated using an external validation set of 670 molecules. Classification models were built to discriminate biodegradable from nonbiodegradable chemicals by means of different mathematical methods: k-nearest neighbors, partial least squares discriminant analysis, and support vector machines, as well as their consensus models. The proposed models and the derived consensus analysis demonstrated good classification performance relative to previously published QSAR models of biodegradation. Relationships between the molecular descriptors selected in each QSAR model and biodegradability were also evaluated.
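The consensus step described above can be sketched as a simple majority vote across classifier outputs. The stand-in predictions below are hypothetical; they do not reproduce the study's actual k-nearest neighbors, PLS-DA, or SVM models.

```python
# Minimal consensus-classification sketch: combine the class predictions of
# several models (hypothetical stand-ins, not the paper's fitted models)
# by majority vote, and report how strongly the models agree.

from collections import Counter

def consensus(predictions):
    """Majority vote across classifiers; returns (label, agreement fraction)."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(predictions)

# Hypothetical per-model predictions for one query chemical:
votes = ["biodegradable", "biodegradable", "nonbiodegradable"]
label, agreement = consensus(votes)
print(label, agreement)  # "biodegradable", with 2 of 3 models agreeing
```

In practice a consensus model may also abstain when the agreement fraction falls below a chosen threshold, trading coverage for reliability.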


Subjects
Statistical Models , Small Molecule Libraries/metabolism , Environmental Biodegradation , Chemical Databases , Molecular Structure , Quantitative Structure-Activity Relationship , Small Molecule Libraries/chemistry , Small Molecule Libraries/classification
17.
Molecules ; 17(5): 4791-810, 2012 Apr 25.
Article in English | MEDLINE | ID: mdl-22534664

ABSTRACT

One of the OECD principles for model validation requires defining the Applicability Domain (AD) of QSAR models. This is important because reliable predictions are generally limited to query chemicals structurally similar to the training compounds used to build the model. Characterization of the interpolation space is therefore central to defining the AD, and in this study several existing descriptor-based approaches to this task are discussed and compared by applying them to validated datasets from the literature. The algorithms adopted by the different approaches define the interpolation space in several ways, and the thresholds they set strongly influence which predictions are treated as extrapolations. For each dataset and approach implemented in this study, the comparison was carried out by considering the model statistics and the relative position of the test set with respect to the training space.
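Two common descriptor-based ways of characterizing the interpolation space can be sketched as follows: a descriptor-range check and a centroid-distance check. The training descriptors, query point, and threshold choice are illustrative only, not those of any specific approach compared in the study.

```python
import math

# Illustrative AD characterizations over a toy 2-descriptor training set.
train = [[1.0, 2.0], [2.0, 3.0], [1.5, 2.5], [2.5, 2.0]]

# Range-based AD: inside iff every descriptor lies within the training min/max.
lo = [min(col) for col in zip(*train)]
hi = [max(col) for col in zip(*train)]

def in_range_ad(x):
    return all(l <= v <= h for v, l, h in zip(x, lo, hi))

# Distance-based AD: inside iff the distance to the training centroid does not
# exceed the largest training-point distance (one possible threshold choice;
# the threshold strongly influences which queries count as extrapolations).
centroid = [sum(col) / len(train) for col in zip(*train)]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

radius = max(dist(x, centroid) for x in train)

def in_dist_ad(x):
    return dist(x, centroid) <= radius

query = [1.2, 2.2]
print(in_range_ad(query), in_dist_ad(query))  # inside by both definitions
```

A query can fall inside one definition of the AD and outside another, which is exactly why the comparison of approaches and thresholds matters.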


Subjects
Statistical Models , Quantitative Structure-Activity Relationship , Algorithms , Chemical Models
18.
Front Pharmacol ; 13: 864742, 2022.
Article in English | MEDLINE | ID: mdl-35496281

ABSTRACT

Regulatory toxicology testing has traditionally relied on in vivo methods to inform decision-making. However, scientific, practical, and ethical considerations have led to an increased interest in the use of in vitro and in silico methods to fill data gaps. While in vitro experiments have the advantage of rapid application across large chemical sets, interpretation of data coming from these non-animal methods can be challenging due to the mechanistic nature of many assays. In vitro to in vivo extrapolation (IVIVE) has emerged as a computational tool to help facilitate this task. Specifically, IVIVE uses physiologically based pharmacokinetic (PBPK) models to estimate tissue-level chemical concentrations based on various dosing parameters. This approach is used to estimate the administered dose needed to achieve in vitro bioactivity concentrations within the body. IVIVE results can be useful to inform on metrics such as margin of exposure or to prioritize potential chemicals of concern, but the PBPK models used in this approach have extensive data requirements. Thus, access to input parameters, as well as the technical requirements of applying and interpreting models, has limited the use of IVIVE as a routine part of in vitro testing. As interest in using non-animal methods for regulatory and research contexts continues to grow, our perspective is that access to computational support tools for PBPK modeling and IVIVE will be essential for facilitating broader application and acceptance of these techniques, as well as for encouraging the most scientifically sound interpretation of in vitro results. We highlight recent developments in two open-access computational support tools for PBPK modeling and IVIVE accessible via the Integrated Chemical Environment (https://ice.ntp.niehs.nih.gov/), demonstrate the types of insights these tools can provide, and discuss how these analyses may inform in vitro-based decision making.
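Under linear kinetics, the reverse-dosimetry step at the heart of IVIVE reduces to a simple scaling: the equivalent administered dose is the in vitro bioactive concentration divided by the steady-state plasma concentration produced by a unit dose. The sketch below uses hypothetical parameter values; real applications rely on full PBPK models such as those behind the ICE tools.

```python
# Reverse-dosimetry sketch (hypothetical numbers): with linear kinetics the
# steady-state plasma concentration (Css) scales with dose, so the equivalent
# administered dose (EAD) is the in vitro bioactive concentration divided by
# the Css predicted for a unit dose.

def ead_mg_per_kg_day(bioactive_uM, css_uM_per_unit_dose):
    """EAD (mg/kg/day) whose predicted Css equals the in vitro concentration."""
    return bioactive_uM / css_uM_per_unit_dose

# Hypothetical chemical: in vitro bioactivity at 3 uM; a PK model predicts a
# 1 mg/kg/day dose yields Css = 1.5 uM.
print(ead_mg_per_kg_day(3.0, 1.5))  # -> 2.0 mg/kg/day
```

Comparing such an EAD against estimated human exposure gives the margin-of-exposure metric mentioned above; the chemical-specific Css per unit dose is where the PBPK model's data demands enter.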

19.
Toxicol Sci ; 188(1): 34-47, 2022 06 28.
Article in English | MEDLINE | ID: mdl-35426934

ABSTRACT

Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability of the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize the variability and reproducibility of results across a set of up to 2441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization, on average, with only 60% likelihood. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat, feed used, treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as sources of the variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we quantified a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
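The categorical-concordance idea can be sketched by binning replicate LD50 values into the EPA acute oral categories and counting agreeing replicate pairs. The replicate values below are hypothetical, chosen to straddle the 500 mg/kg category boundary.

```python
from itertools import combinations

# EPA acute oral toxicity categories (mg/kg):
# I <= 50, II > 50-500, III > 500-5000, IV > 5000.
def epa_category(ld50):
    if ld50 <= 50:
        return "I"
    if ld50 <= 500:
        return "II"
    if ld50 <= 5000:
        return "III"
    return "IV"

def pairwise_concordance(ld50s):
    """Fraction of replicate-study pairs that fall in the same category."""
    pairs = list(combinations(ld50s, 2))
    same = sum(epa_category(a) == epa_category(b) for a, b in pairs)
    return same / len(pairs)

# Hypothetical replicate LD50 results (mg/kg) for one chemical:
replicates = [400, 520, 610, 450]
print(pairwise_concordance(replicates))  # only 2 of 6 pairs agree
```

Values near a category boundary drive concordance down sharply, which is consistent with the roughly 60% agreement the study reports across replicate studies.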


Subjects
Reproducibility of Results , Animals , Factual Databases , Probability , Rats , Risk Assessment/methods , Acute Toxicity Tests/methods
20.
Birth Defects Res ; 114(16): 1037-1055, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35532929

ABSTRACT

BACKGROUND: The developmental toxicity potential (dTP) concentration from the devTOX quickPredict (devTOXqP) assay, a metabolomics-based human induced pluripotent stem cell assay, predicts a chemical's developmental toxicity potency. Here, in vitro to in vivo extrapolation (IVIVE) approaches were applied to address whether the devTOXqP assay could quantitatively predict in vivo developmental toxicity lowest effect levels (LELs) for the prototypical teratogen valproic acid (VPA) and a group of structural analogues. METHODS: VPA and a series of structural analogues were tested with the devTOXqP assay to determine dTP concentration and we estimated the equivalent administered doses (EADs) that would lead to plasma concentrations equivalent to the in vitro dTP concentrations. The EADs were compared to the LELs in rat developmental toxicity studies, human clinical doses, and EADs reported using other in vitro assays. To evaluate the impact of different pharmacokinetic (PK) models on IVIVE outcomes, we compared EADs predicted using various open-source and commercially available PK and physiologically based PK (PBPK) models. To evaluate the effect of in vitro kinetics, an equilibrium distribution model was applied to translate dTP concentrations to free medium concentrations before subsequent IVIVE analyses. RESULTS: The EAD estimates for the VPA analogues based on different PK/PBPK models were quantitatively similar to in vivo data from both rats and humans, where available, and the derived rank order of the chemicals was consistent with observed in vivo developmental toxicity. Different models were identified that provided accurate predictions for rat prenatal LELs and conservative estimates of human safe exposure. The impact of in vitro kinetics on EAD estimates is chemical-dependent. EADs from this study were within range of predicted doses from other in vitro and model organism data.
CONCLUSIONS: This study highlights the importance of pharmacokinetic considerations when using in vitro assays and demonstrates the utility of the devTOXqP human stem cell-based platform to quantitatively assess a chemical's developmental toxicity potency.
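A minimal sketch of the in vitro kinetics adjustment and the potency ranking described above, with hypothetical dTP concentrations and a hypothetical fraction-unbound factor (the study's equilibrium distribution model is more detailed than this single multiplier):

```python
# In vitro kinetics sketch (hypothetical numbers): approximate the equilibrium
# distribution step as a fraction-unbound-in-medium correction, then rank
# chemicals by dTP concentration (lower concentration = more potent).

def free_conc_uM(nominal_uM, fu_medium):
    """Free medium concentration from the nominal test concentration."""
    return nominal_uM * fu_medium

def rank_by_potency(dtp):
    """Chemical names ordered most to least potent (lowest dTP first)."""
    return sorted(dtp, key=dtp.get)

# Hypothetical dTP concentrations (uM) for VPA and two made-up analogues:
dtp = {"VPA": 100.0, "analog_1": 25.0, "analog_2": 400.0}
print(free_conc_uM(dtp["VPA"], 0.6))  # free concentration if 60% is unbound
print(rank_by_potency(dtp))           # analog_1 ranked most potent
```

Feeding the free rather than nominal concentration into the IVIVE step lowers the resulting EAD by the same factor, which is one reason the kinetics adjustment is chemical-dependent.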


Subjects
Induced Pluripotent Stem Cells , Valproic Acid , Animals , Female , Humans , Pregnancy , Rats , Teratogens/toxicity , Valproic Acid/toxicity