Results 1 - 20 of 148
1.
ArXiv ; 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253636

ABSTRACT

Researchers in biomedicine, public health, and the life sciences often spend weeks or months discovering, accessing, curating, and integrating data from disparate sources, significantly delaying the onset of actual analysis and innovation. Instead of countless developers creating redundant and inconsistent data pipelines, BioBricks.ai offers a centralized data repository and a suite of developer-friendly tools to simplify access to scientific data. Currently, BioBricks.ai delivers over ninety biological and chemical datasets. It provides a package manager-like system for installing and managing dependencies on data sources. Each 'brick' is a Data Version Control git repository that supports an updateable pipeline for extraction, transformation, and loading of data into the BioBricks.ai backend at https://biobricks.ai. Use cases include accelerating data science workflows and facilitating the creation of novel data assets by integrating multiple datasets into unified, harmonized resources. In conclusion, BioBricks.ai offers an opportunity to accelerate access to and use of public data through a single open platform.

2.
Sci Total Environ ; 953: 176003, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39236816

ABSTRACT

Brazil stands as the world's leading coffee producer, where the extensive use of pesticides is economically critical yet poses health and environmental risks due to their non-selective mechanisms of action. Specifically, triazole fungicides are widely used in agriculture to manage fungal diseases and are known to disrupt mammalian CYP450 and liver microsomal enzymes. This research establishes a framework for risk characterization of human exposure to triazole fungicides by internal-dose biomonitoring, biochemical marker measurements, and integration of high-throughput screening (HTS) data via computational toxicology workflows from the Integrated Chemical Environment (ICE). Volunteers from the southern region of Minas Gerais, Brazil, were divided into two groups: farmworkers and spouses occupationally and environmentally exposed to pesticides from rural areas (n = 140) and individuals from the urban area to serve as a comparison group (n = 50). Three triazole fungicides, cyproconazole, epoxiconazole, and triadimenol, were detected in the urine samples of both men and women in the rural group. Androstenedione and testosterone hormones were significantly reduced in the farmworker group (Mann-Whitney test, p < 0.0001). The data show a significant inverse association of testosterone with cholesterol, LDL, VLDL, triglycerides, and glucose and a direct association with HDL (Spearman's correlation, p < 0.05). In the ICE workflow, active in vitro HTS assays were identified for the three measured triazoles and three other active ingredients from the pesticide formulations. The curated HTS data confirm bioactivities predominantly related to steroid hormone metabolism, cellular stress processes, and CYP450 enzymes impacted by fungicide exposure at occupationally and environmentally relevant concentrations based on the in vitro to in vivo extrapolation models. 
These results characterize the potentially significant human health risk, particularly from the high frequency and intensity of exposure to epoxiconazole. This study showcases the critical role of biomonitoring and utility of computational tools in evaluating pesticide exposure and minimizing the risk.

3.
Environ Health Perspect ; 132(8): 85002, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39106156

ABSTRACT

BACKGROUND: The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAMs development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, due to their intricacy, deep understanding and careful selection are imperative to align the appropriate methods with their intended applications. OBJECTIVES: This commentary aims to deepen the understanding of class-based approaches by elucidating the pivotal role of chemical similarity (structural and biological) in clustering and classification approaches (CCAs). It addresses the dichotomy between general end point-agnostic similarity, often entailing unsupervised analysis, and end point-specific similarity necessitating supervised learning. The goal is to highlight the nuances of these approaches, their applications, and common misuses. DISCUSSION: Understanding similarity is pivotal in toxicological research involving CCAs. The effectiveness of these approaches depends on the right definition and measure of similarity, which varies based on the context and objectives of the study. This choice is influenced by how chemical structures are represented and by the respective labels indicating biological activity, if applicable. The distinction between unsupervised clustering and supervised classification methods is vital, requiring the use of end point-agnostic vs. end point-specific similarity definitions.
Separate use or combination of these methods requires careful consideration to prevent bias and ensure relevance for the goal of the study. Unsupervised methods use end point-agnostic similarity measures to uncover general structural patterns and relationships, aiding hypothesis generation and facilitating exploration of datasets without the need for predefined labels or explicit guidance. Conversely, supervised techniques demand end point-specific similarity to group chemicals into predefined classes or to train classification models, allowing accurate predictions for new chemicals. Misuse can arise when unsupervised methods are applied to end point-specific contexts, like analog selection in read-across, leading to erroneous conclusions. This commentary provides insights into the significance of similarity and its role in supervised classification and unsupervised clustering approaches. https://doi.org/10.1289/EHP14001.


Subject(s)
Machine Learning, Cluster Analysis, Unsupervised Machine Learning, Toxicology/methods, Algorithms
4.
J Cheminform ; 16(1): 101, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152469

ABSTRACT

With the increased availability of chemical data in public databases, innovative techniques and algorithms have emerged for the analysis, exploration, visualization, and extraction of information from these data. One such technique is chemical grouping, where chemicals with common characteristics are categorized into distinct groups based on physicochemical properties, use, biological activity, or a combination. However, existing tools for chemical grouping often require specialized programming skills or the use of commercial software packages. To address these challenges, we developed a user-friendly chemical grouping workflow implemented in KNIME, a free, open-source, low/no-code, data analytics platform. The workflow serves as an all-encompassing tool, expertly incorporating a range of processes such as molecular descriptor calculation, feature selection, dimensionality reduction, hyperparameter search, and supervised and unsupervised machine learning methods, enabling effective chemical grouping and visualization of results. Furthermore, we implemented tools for interpretation, identifying key molecular descriptors for the chemical groups, and using natural language summaries to clarify the rationale behind these groupings. The workflow was designed to run seamlessly in both the KNIME local desktop version and KNIME Server WebPortal as a web application. It incorporates interactive interfaces and guides to assist users in a step-by-step manner. We demonstrate the utility of this workflow through a case study using an eye irritation and corrosion dataset.

Scientific contributions: This work presents a novel, comprehensive chemical grouping workflow in KNIME, enhancing accessibility by integrating a user-friendly graphical interface that eliminates the need for extensive programming skills.
This workflow uniquely combines several features such as automated molecular descriptor calculation, feature selection, dimensionality reduction, and machine learning algorithms (both supervised and unsupervised), with hyperparameter optimization to refine chemical grouping accuracy. Moreover, we have introduced an innovative interpretative step and natural language summaries to elucidate the underlying reasons for chemical groupings, significantly advancing the usability of the tool and interpretability of the results.

5.
ALTEX ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38979646

ABSTRACT

Dysregulation of Vascular Endothelial Growth Factor (VEGF) and its receptor (VEGFR) contributes to atherosclerosis and cardiovascular disease (CVD), making it a potential target for CVD risk assessment. High throughput screening (HTS) approaches have resulted in large-scale in vitro data, providing mechanistic information that can help assess chemical toxicity and identify molecular-initiating events (MIEs) of adverse outcome pathways (AOPs). AOPs represent a logical sequence of biological responses contributing to toxicity and are valuable tools to inform chemical risk assessments. Here, we used HTS data to formulate an AOP relating VEGF signaling perturbation to atherosclerosis. ToxCast, Tox21, and PubChem data were evaluated to obtain bio-profiles of 4165 compounds active in assays targeting VEGFR. Cheminformatics analysis identified 109 enriched structural fingerprints. Applying a subspace clustering approach based on chemical structure bioactivity yielded 12 primary targets, whose relevance to CVD was confirmed by an AI-assisted literature review. An AOP was hypothesized by coupling mechanistic relationships highlighted by HTS data with literature review findings, linking Serotonin Receptor (HTR), Estrogen Receptor Alpha (ERα), and Vasopressin Receptor (AVPR) targets with VEGFR activity, angiogenic signaling, and atherosclerosis. Several endocrine disrupting chemicals (EDCs), e.g., bisphenols, triclosan, dichlorodiphenyltrichloroethane (DDT), and polychlorinated biphenyls (PCBs), were identified as relevant chemical stressors. Subspace clustering of these chemicals evaluated potential MIEs and highlighted associations with use-case classes. By applying computational methods to profile HTS data and hypothesize a mechanistic AOP, this study proposes a data-driven approach to evaluating environmental cardiotoxicity, which could eventually supplement and reduce the need for animal testing in toxicological assessments.


This study explores how disruptions in VEGFR contribute to atherosclerosis, the buildup of plaques in arteries that can lead to CVD. By analyzing HTS data relevant to cardiovascular health, researchers identify how different chemicals affect VEGFR and potentially cause CVD. Using these screening methods, which quickly test many chemicals, the study identifies specific biological changes leading to adverse health outcomes. This research aims to develop methods to assess chemical toxicity without relying on animal testing, making it relevant to human health. The findings link certain chemicals, e.g., bisphenols and DDT, to changes in VEGFR activity and the development of atherosclerosis. An adverse outcome pathway (AOP) framework maps the sequence of biological events from molecular perturbations to disease, providing mechanistic insight and identifying chemicals impacting the AOP targets. This approach helps in understanding the risks posed by environmental chemicals and protects public health while reducing animal experiments.

6.
Biotechnol J ; 19(6): e2300659, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38863121

ABSTRACT

All-trans retinoic acid (atRA) is an endogenous ligand of the retinoic acid receptors, which heterodimerize with retinoid X receptors. AtRA is generated in tissues from vitamin A (retinol) metabolism to form a paracrine signal and is locally degraded by cytochrome P450 family 26 (CYP26) enzymes. The CYP26 family consists of three subtypes: A1, B1, and C1, which are differentially expressed during development. This study aims to develop and validate a high-throughput screening assay to identify CYP26A1 inhibitors in a cell-free system using a luminescent P450-Glo assay technology. The assay performed well, with a signal-to-background ratio of 25.7, a coefficient of variation of 8.9%, and a Z-factor of 0.7. To validate the assay, we tested a subset of 39 compounds that included known CYP26 inhibitors and retinoids, as well as positive and negative control compounds selected from the literature and/or the ToxCast/Tox21 portfolio. Known CYP26A1 inhibitors were confirmed, and predicted CYP26A1 inhibitors, such as chlorothalonil, prochloraz, and SSR126768, were identified, demonstrating the reliability and robustness of the assay. Given the general importance of atRA as a morphogenetic signal and the localized expression of Cyp26a1 in embryonic tissues, a validated CYP26A1 assay has important implications for evaluating the potential developmental toxicity of chemicals.
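The quality metrics reported here follow standard HTS formulas (notably the Z-factor, Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|). A minimal sketch with invented control readouts, not the study's raw data:

```python
from statistics import mean, stdev

def assay_metrics(positive, negative):
    """Common HTS plate-quality metrics from control-well readouts."""
    mu_p, sd_p = mean(positive), stdev(positive)
    mu_n, sd_n = mean(negative), stdev(negative)
    signal_to_background = mu_p / mu_n
    cv_percent = 100 * sd_p / mu_p            # coefficient of variation
    z_factor = 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)
    return signal_to_background, cv_percent, z_factor

# Hypothetical luminescence values for positive and negative controls.
pos = [980, 1020, 1000, 1010, 990]
neg = [38, 42, 40, 41, 39]

sb, cv, z = assay_metrics(pos, neg)
print(f"S/B = {sb:.1f}, CV = {cv:.1f}%, Z' = {z:.2f}")
```

A Z-factor above 0.5 is conventionally taken to indicate an excellent assay window, which is why the reported value of 0.7 supports the robustness claim.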


Subject(s)
High-Throughput Screening Assays, Retinoic Acid 4-Hydroxylase, High-Throughput Screening Assays/methods, Retinoic Acid 4-Hydroxylase/metabolism, Retinoic Acid 4-Hydroxylase/genetics, Humans, Tretinoin/pharmacology, Tretinoin/metabolism, Cytochrome P-450 Enzyme Inhibitors/pharmacology, Reproducibility of Results
7.
Toxics ; 12(6)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38922117

ABSTRACT

Organophosphorus flame retardants (OPFRs) are abundant and persistent in the environment but have limited toxicity information. Their structural similarity to organophosphate pesticides raises great concern for developmental neurotoxicity (DNT). However, current in vivo testing cannot feasibly provide DNT information for the many OPFRs that lack data. Over the past decade, an in vitro battery was developed to enhance DNT assessment, consisting of assays that evaluate cellular processes in neurodevelopment and function. In this study, behavioral data from small model organisms were also included. To assess whether these assays provide sufficient mechanistic coverage to prioritize chemicals for further testing and/or identify hazards, an integrated approach to testing and assessment (IATA) was developed with additional information from the Integrated Chemical Environment (ICE) and the literature. Human biomonitoring and exposure data were identified, and physiologically based toxicokinetic models were applied to relate in vitro toxicity data to human exposure based on maximum plasma concentration. Eight OPFRs were evaluated: aromatic OPFRs (triphenyl phosphate (TPHP), isopropylated phenyl phosphate (IPP), 2-ethylhexyl diphenyl phosphate (EHDP), tricresyl phosphate (TMPP), isodecyl diphenyl phosphate (IDDP), tert-butylphenyl diphenyl phosphate (BPDP)) and halogenated OPFRs (tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) and tris(2-chloroethyl) phosphate (TCEP)). Two representative brominated flame retardants (BFRs) (2,2',4,4'-tetrabromodiphenyl ether (BDE-47) and 3,3',5,5'-tetrabromobisphenol A (TBBPA)) with known DNT potential were selected for toxicity benchmarking. Data from the DNT battery indicate that the aromatic OPFRs are active at concentrations similar to those of the BFRs and should therefore be evaluated further. However, these assays provide limited information on the mechanisms of the compounds.
By integrating information from ICE and the literature, endocrine disruption was identified as a potential mechanism. This IATA case study indicates that human exposure to some OPFRs could lead to a plasma concentration similar to those exerting in vitro activities, indicating potential concern for human health.

8.
ALTEX ; 41(3): 402-424, 2024.
Article in English | MEDLINE | ID: mdl-38898799

ABSTRACT

The webinar series and workshop titled "Trust Your Gut: Establishing Confidence in Gastrointestinal Models - An Overview of the State of the Science and Contexts of Use" was co-organized by NICEATM, NIEHS, FDA, EPA, CPSC, DoD, and the Johns Hopkins Center for Alternatives to Animal Testing (CAAT) and hosted at the National Institutes of Health in Bethesda, MD, USA on October 11-12, 2023. New approach methods (NAMs) for assessing issues of gastrointestinal tract (GIT)-related toxicity offer promise in addressing some of the limitations associated with animal-based assessments. GIT NAMs vary in complexity, from two-dimensional monolayer cell line-based systems to sophisticated three-dimensional organoid systems derived from human primary cells. Despite advances in GIT NAMs, challenges remain in fully replicating the complex interactions and processes occurring within the human GIT. Presentations and discussions addressed regulatory needs, challenges, and innovations in incorporating NAMs into risk assessment frameworks; explored the state of the science in using NAMs for evaluating systemic toxicity, understanding absorption and pharmacokinetics, evaluating GIT toxicity, and assessing potential allergenicity; and discussed strengths, limitations, and data gaps of GIT NAMs as well as steps needed to establish confidence in these models for use in the regulatory setting.


Non-animal methods to assess whether chemicals may be toxic to the human digestive tract promise to complement or improve on animal-based methods. These approaches, which are based on human or animal cells and/or computer models, are faced with their own technical challenges and need to be shown to predict adverse effects in humans. Regulators are tasked with evaluating submitted data to best protect human health and the environment. A webinar series and workshop brought together scientists from academia, industry, military, and regulatory authorities from different countries to discuss how non-animal methods can be integrated into the risk assessment of drugs, food additives, dietary supplements, pesticides, and industrial chemicals for gastrointestinal toxicity.


Subject(s)
Animal Testing Alternatives, Gastrointestinal Tract, Humans, Animal Testing Alternatives/methods, Animals, Models, Biological, Risk Assessment/methods, Toxicity Tests/methods
9.
Regul Toxicol Pharmacol ; 150: 105648, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38772524

ABSTRACT

Inhalation is a critical route through which substances can exert adverse effects in humans; therefore, it is important to characterize the potential effects that inhaled substances may have on the human respiratory tract by using fit-for-purpose, reliable, and human-relevant testing tools. In regulatory toxicology testing, rats have primarily been used to assess the effects of inhaled substances as they, being mammals, share similarities in the structure and function of the respiratory tract with humans. However, questions about inter-species differences impacting the predictability of human effects have surfaced. Disparities in macroscopic anatomy, microscopic anatomy, or physiology, such as breathing mode (e.g., nose-only versus oronasal breathing), airway structure (e.g., complexity of the nasal turbinates), cell types and location within the respiratory tract, and local metabolism may impact inhalation toxicity testing results. This review shows that these key differences introduce uncertainty into the use of rat data to predict human effects and supports an opportunity to harness modern toxicology tools and a detailed understanding of the human respiratory tract to develop testing approaches grounded in human biology. Ultimately, as the regulatory purpose is protecting human health, there is a need for testing approaches based on human biology and mechanisms of toxicity.


Subject(s)
Respiratory System, Species Specificity, Toxicity Tests, Animals, Humans, Respiratory System/drug effects, Respiratory System/anatomy & histology, Rats, Toxicity Tests/methods, Inhalation Exposure/adverse effects, Risk Assessment
10.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
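The categorical comparison described above can be sketched as a simple lookup. The abstract states the category III and IV bounds; the category I and II cut-offs below are the standard USEPA acute oral ones (50 and 500 mg/kg), and the LD50 values are invented for illustration:

```python
def usepa_acute_oral_category(ld50_mg_per_kg):
    """Map a rat acute oral LD50 (mg/kg) to a USEPA toxicity category."""
    if ld50_mg_per_kg <= 50:
        return "I"
    if ld50_mg_per_kg <= 500:
        return "II"
    if ld50_mg_per_kg <= 5000:
        return "III"
    return "IV"

def categorical_concordance(predicted_ld50s, observed_ld50s):
    """Fraction of chemicals placed in the same category by both methods."""
    hits = sum(
        usepa_acute_oral_category(p) == usepa_acute_oral_category(o)
        for p, o in zip(predicted_ld50s, observed_ld50s)
    )
    return hits / len(predicted_ld50s)

# Hypothetical predicted (e.g., QSAR) vs. empirical in vivo LD50s (mg/kg).
predicted = [2100, 600, 5500, 4800]
observed = [1800, 450, 6000, 5100]
print(categorical_concordance(predicted, observed))
```

Discordance in this toy set comes from values that straddle a category boundary (500 or 5000 mg/kg), the same edge effect a real concordance analysis must contend with.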


Subject(s)
Computer Simulation, Pesticides, Quantitative Structure-Activity Relationship, Toxicity Tests, Acute, United States Environmental Protection Agency, Animals, Risk Assessment, Pesticides/toxicity, Lethal Dose 50, Rats, Administration, Oral, Toxicity Tests, Acute/methods, United States, Reproducibility of Results
11.
ALTEX ; 41(2): 179-201, 2024.
Article in English | MEDLINE | ID: mdl-38629803

ABSTRACT

When The Principles of Humane Experimental Technique was published in 1959, authors William Russell and Rex Burch had a modest goal: to make researchers think about what they were doing in the laboratory, and to do it more humanely. Sixty years later, their groundbreaking book was celebrated for inspiring a revolution in science and launching a new field: the 3Rs of alternatives to animal experimentation. On November 22, 2019, some pioneering and leading scientists and researchers in the field gathered at the Johns Hopkins Bloomberg School of Public Health in Baltimore for the 60 Years of the 3Rs Symposium: Lessons Learned and the Road Ahead. The event was sponsored by the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), the Foundation for Chemistry Research and Initiatives, the Alternative Research & Development Foundation (ARDF), the American Cleaning Institute (ACI), the International Fragrance Association (IFRA), the Institute for In Vitro Sciences (IIVS), John "Jack" R. Fowle III, and the Society of Toxicology (SoT). Fourteen presentations shared the history behind the groundbreaking publication, international efforts to achieve its aims, stumbling blocks to progress, as well as remarkable achievements. The day was a tribute to Russell and Burch, and a testament to what is possible when people from many walks of life (science, government, and industry) work toward a common goal.


William Russell and Rex Burch published their book The Principles of Humane Experimental Technique in 1959. The book encouraged researchers to replace animal experiments where possible, to refine experiments with animals in order to reduce their suffering, and to reduce the number of animals used in experiments to the minimum. Sixty years later, a group of pioneering and leading scientists and researchers in the field gathered to share how the publication came about and how its vision inspired international collaborations and successes on many different levels, including new laws. The paper includes an overview of important milestones in the history of alternatives to animal experimentation.


Subject(s)
Animal Experimentation, Animal Testing Alternatives, Animals, Animal Testing Alternatives/methods, Animal Welfare, Research Design
12.
Front Toxicol ; 6: 1321857, 2024.
Article in English | MEDLINE | ID: mdl-38482198

ABSTRACT

Introduction: Skin sensitization, which leads to allergic contact dermatitis, is a key toxicological endpoint with high occupational and consumer prevalence. This study optimized several in vitro assays listed in OECD skin sensitization test guidelines for use on a quantitative high-throughput screening (qHTS) platform and performed in silico model predictions to assess the skin sensitization potential of prioritized compounds from the Tox21 10K compound library. Methods: First, we screened the entire Tox21 10K compound library using a qHTS KeratinoSens™ (KS) assay and built a quantitative structure-activity relationship (QSAR) model based on the KS results. From the qHTS KS screening results, we prioritized 288 compounds to cover a wide range of structural chemotypes and tested them in the solid phase extraction-tandem mass spectrometry (SPE-MS/MS) direct peptide reactivity assay (DPRA), the IL-8 homogeneous time-resolved fluorescence (HTRF) assay, and CD86 and CD54 surface expression in THP1 cells, and predicted in silico sensitization potential using the OECD QSAR Toolbox (v4.5). Results: Interpreting tiered qHTS datasets using a defined approach showed the effectiveness and efficiency of in vitro methods. We selected structural chemotypes to represent this diverse chemical collection and to explore previously unidentified structural contributions to sensitization potential. Discussion: Here, we provide a skin sensitization dataset of unprecedented size, along with associated tools and analysis designed to support chemical assessments.

13.
Arch Toxicol ; 98(5): 1253-1269, 2024 May.
Article in English | MEDLINE | ID: mdl-38483583

ABSTRACT

Since the 1940s, patch tests in healthy volunteers (Human Predictive Patch Tests, HPPTs) have been used to identify chemicals that cause skin sensitization in humans. Recently, we reported the results of a major curation effort to support the development of OECD Guideline 497 on Defined Approaches (DAs) for skin sensitization (OECD in Guideline No. 497: Defined Approaches on Skin Sensitisation, 2021a. https://doi.org/10.1787/b92879a4-en ). In the course of this work, we compiled and published a database of 2277 HPPT results for 1366 unique test substances (Strickland et al. in Arch Toxicol 97:2825-2837, 2023. https://doi.org/10.1007/s00204-023-03530-3 ). Here we report a detailed analysis of the value of HPPT data for classification of chemicals as skin sensitizers under the United Nations' Globally Harmonized System of Classification and Labelling of Chemicals (GHS). As a result, we propose the dose per skin area (DSA) used for classification by the GHS to be replaced by or complemented with a dose descriptor that may better reflect sensitization incidence [e.g., the DSA causing induction of sensitization in one individual (DSA1+) or the DSA leading to an incidence of induction in 5% of the tested individuals (DSA05)]. We also propose standardized concepts and workflows for assessing individual HPPT results, for integrating multiple HPPT results and for using them in concert with Local Lymph Node Assay (LLNA) data in a weight of evidence (WoE) assessment. Overall, our findings show that HPPT results are often not sufficient for deriving unambiguous classifications on their own. However, where they are, the resulting classifications are reliable and reproducible and can be integrated well with those from other skin sensitization data, such as the LLNA.
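The proposed dose descriptors can be computed directly from incidence data. A hedged sketch: the DSA1+/DSA05 names follow the abstract, but the HPPT results below are invented and the tuple layout is an assumption for illustration:

```python
def dsa_one_plus(results):
    """DSA1+: lowest dose per skin area inducing sensitization in at
    least one tested individual. results: (dsa, n_sensitized, n_tested)."""
    doses = [dsa for dsa, k, n in results if k >= 1]
    return min(doses) if doses else None

def dsa_at_incidence(results, fraction=0.05):
    """DSA05 (default): lowest DSA whose observed induction incidence
    meets or exceeds the given fraction of tested individuals."""
    doses = [dsa for dsa, k, n in results if n and k / n >= fraction]
    return min(doses) if doses else None

# Hypothetical HPPT results: (DSA in ug/cm2, induced, tested).
hppt = [(250, 0, 25), (500, 1, 24), (1000, 3, 25), (2000, 10, 25)]

dsa1plus = dsa_one_plus(hppt)      # any induction at all
dsa05 = dsa_at_incidence(hppt)     # >= 5% incidence
print(dsa1plus, dsa05)
```

As the toy data show, DSA1+ is necessarily less than or equal to DSA05, which is why the two descriptors can reflect sensitization incidence differently for classification purposes.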


Subject(s)
Allergic Contact Dermatitis, Humans, Patch Tests, Allergic Contact Dermatitis/etiology, Allergens/toxicity, Skin, Local Lymph Node Assay
14.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and associated experimental data. This is especially true when those data are collected from multiple sources as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read, and then the resulting in-memory representation is cross-referenced with any existing identifiers for consistency. Finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and then removal of duplicates. This workflow was initially developed to support collaborative modeling QSAR projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. 
Both QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes the accuracy and reliability of molecular descriptors. The freely available resources in KNIME, GitHub, and Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
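In practice these standardization operations are performed with a cheminformatics toolkit such as RDKit inside the KNIME workflow; as a dependency-free illustration of just the desalting and de-duplication steps, here is a toy sketch that uses the count of letters in a SMILES fragment as a crude stand-in for real largest-fragment (atom-count) logic:

```python
def desalt(smiles):
    """Keep the largest dot-separated fragment of a SMILES string.
    Crude proxy: letter count approximates atom count; real workflows
    parse the structure and pick the largest fragment properly."""
    return max(smiles.split("."), key=lambda frag: sum(c.isalpha() for c in frag))

def standardize_batch(smiles_list):
    """Desalt each structure, then drop duplicates, preserving order."""
    seen, ready = set(), []
    for smi in smiles_list:
        parent = desalt(smi)
        if parent not in seen:
            seen.add(parent)
            ready.append(parent)
    return ready

# Hypothetical inputs: a parent compound, its sodium chloride salt form,
# and a potassium benzoate salt.
raw = ["CCO", "CCO.[Na+].[Cl-]", "c1ccccc1C(=O)O.[K+]"]
print(standardize_batch(raw))
```

The salt form collapses onto its parent and the duplicate is dropped, which is exactly why such standardization must precede descriptor calculation: otherwise the same chemical can appear twice with different descriptors.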

15.
Environ Health Perspect ; 132(2): 27006, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38349723

ABSTRACT

BACKGROUND: Extraction of toxicological end points from primary sources is a central component of systematic reviews and human health risk assessments. To ensure optimal use of these data, consistent language should be used for end point descriptions. However, primary source language describing treatment-related end points can vary greatly, resulting in large labor efforts to manually standardize extractions before data are fit for use. OBJECTIVES: To minimize these labor efforts, we applied an augmented intelligence approach and developed automated tools to support standardization of extracted information via application of preexisting controlled vocabularies. METHODS: We created and applied a harmonized controlled vocabulary crosswalk, consisting of Unified Medical Language System (UMLS) codes, German Federal Institute for Risk Assessment (BfR) DevTox harmonized terms, and The Organization for Economic Co-operation and Development (OECD) end point vocabularies, to roughly 34,000 extractions from prenatal developmental toxicology studies conducted by the National Toxicology Program (NTP) and 6,400 extractions from European Chemicals Agency (ECHA) prenatal developmental toxicology studies, all recorded based on the original study report language. RESULTS: We automatically applied standardized controlled vocabulary terms to 75% of the NTP extracted end points and 57% of the ECHA extracted end points. Of all the standardized extracted end points, about half (51%) required manual review for potential extraneous matches or inaccuracies. Extracted end points that were not mapped to standardized terms tended to be too general or required human logic to find a good match. We estimate that this augmented intelligence approach saved >350 hours of manual effort and yielded valuable resources including a controlled vocabulary crosswalk, organized related terms lists, code for implementing an automated mapping workflow, and a computationally accessible dataset. 
DISCUSSION: Augmenting manual efforts with automation tools increased the efficiency of producing a findable, accessible, interoperable, and reusable (FAIR) dataset of regulatory guideline studies. This open-source approach can be readily applied to other legacy developmental toxicology datasets, and the code design is customizable for other study types. https://doi.org/10.1289/EHP13215.
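The automated standardization step described above can be sketched in a few lines. The crosswalk rows, endpoint strings, and the simple partial-match fallback below are invented for illustration and are far simpler than the published UMLS/DevTox/OECD workflow:

```python
def normalize(text):
    """Lowercase and collapse whitespace so trivial lexical variants match."""
    return " ".join(text.lower().split())

# Hypothetical crosswalk rows: original study-report phrasing -> harmonized term.
CROSSWALK = {
    "fetal body weight decreased": "fetal weight, reduced",
    "foetal body weight decreased": "fetal weight, reduced",
    "cleft palate": "palate, cleft",
}

def map_endpoint(raw):
    """Return (harmonized term or None, needs_manual_review flag)."""
    key = normalize(raw)
    if key in CROSSWALK:
        return CROSSWALK[key], False
    # Fallback: a unique partial match is kept but flagged for review,
    # analogous to the roughly half of automated mappings checked by hand.
    hits = {term for phrase, term in CROSSWALK.items()
            if key in phrase or phrase in key}
    if len(hits) == 1:
        return hits.pop(), True
    return None, True
```

Extractions that match no crosswalk row at all come back as `None` and fall to fully manual curation, mirroring the unmapped endpoints the authors describe as too general for automated matching.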


Subject(s)
Household Articles, Controlled Vocabulary, Humans, Female, Pregnancy, Systematic Reviews as Topic, Intelligence, Research Design
16.
Arch Toxicol ; 98(3): 735-754, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244040

ABSTRACT

The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure, to a data-rich field ripe for AI integration. The volume, variety and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured, a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher in new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.
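The "probabilistic outputs to capture uncertainties" idea can be illustrated with a toy bootstrap: instead of reporting a single predicted point of departure, resampling model predictions yields an uncertainty interval. The prediction values below are entirely invented for the sketch:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical model predictions of a log10 point of departure (POD)
predictions = [2.1, 1.8, 2.5, 2.0, 1.9, 2.3, 2.2, 1.7]

def bootstrap_interval(values, n_boot=1000, alpha=0.05):
    """Nonparametric bootstrap percentile interval for the mean."""
    means = sorted(
        statistics.mean(random.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

A risk assessor can then propagate the interval, rather than a point estimate, into downstream safety margins.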


Subject(s)
Artificial Intelligence, Chemical Safety, Humans, Neural Networks (Computer), Machine Learning, Catalysis
17.
BMC Bioinformatics ; 25(1): 4, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166637

ABSTRACT

BACKGROUND: Chemically induced skin sensitization, or allergic contact dermatitis, is a common occupational and public health issue. Regulatory authorities require an assessment of potential to cause skin sensitization for many chemical products. Defined approaches for skin sensitization (DASS) identify potential chemical skin sensitizers by integrating data from multiple non-animal tests based on human cells, molecular targets, and computational model predictions using standardized data interpretation procedures. While several DASS are internationally accepted by regulatory agencies, the data interpretation procedures vary in logical complexity, and manual application can be time-consuming or prone to error. RESULTS: We developed the DASS App, an open-source web application, to facilitate user application of three regulatory testing strategies for skin sensitization assessment: the Two-out-of-Three (2o3), the Integrated Testing Strategy (ITS), and the Key Event 3/1 Sequential Testing Strategy (KE 3/1 STS) without the need for software downloads or computational expertise. The application supports upload and analysis of user-provided data, includes steps to identify inconsistencies and formatting issues, and provides predictions in a downloadable format. CONCLUSION: This open-access web-based implementation of internationally harmonized regulatory guidelines for an important public health endpoint is designed to support broad user uptake and consistent, reproducible application. The DASS App is freely accessible via https://ntp.niehs.nih.gov/go/952311 and all scripts are available on GitHub ( https://github.com/NIEHS/DASS ).
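The simplest of the three data interpretation procedures, the Two-out-of-Three (2o3), classifies a chemical from concordant results among three information sources (e.g. DPRA, KeratinoSens, h-CLAT). A minimal sketch of that rule, with an illustrative call encoding that is not the app's actual input format:

```python
def classify_2o3(calls):
    """calls: dict mapping assay name -> 'pos', 'neg', or None (not run).

    Returns 'sensitizer', 'non-sensitizer', or 'inconclusive'.
    """
    votes = [c for c in calls.values() if c in ("pos", "neg")]
    if votes.count("pos") >= 2:      # two concordant positives
        return "sensitizer"
    if votes.count("neg") >= 2:      # two concordant negatives
        return "non-sensitizer"
    return "inconclusive"            # discordant or insufficient data
```

The ITS and KE 3/1 STS procedures layer scoring and sequential logic on top of this kind of rule, which is where manual application becomes error-prone and a validated implementation pays off.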


Subject(s)
Allergic Contact Dermatitis, Mobile Applications, Animals, Humans, Animal Testing Alternatives/methods, Skin, Allergic Contact Dermatitis/etiology
18.
Regul Toxicol Pharmacol ; 147: 105564, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38182013

ABSTRACT

In toxicology and regulatory testing, the use of animal methods has been both a cornerstone and a subject of intense debate. To continue this discourse, a panel and audience representing scientists from various sectors and countries convened at a workshop held during the 12th World Congress on Alternatives and Animal Use in the Life Sciences (WC-12). The ensuing discussion focused on the scientific and ethical considerations surrounding the necessity and responsibility of defending the creation of new animal data in regulatory testing. The primary aim was to foster an open dialogue between the panel members and the audience while encouraging diverse perspectives on the responsibilities and obligations of various stakeholders (including industry, regulatory bodies, technology developers, research scientists, and animal welfare NGOs) in defending the development and subsequent utilization of new animal data. This workshop summary report captures the key elements from this critical dialogue and collective introspection. It describes the intersection of scientific progress and ethical responsibility as all sectors seek to accelerate the pace of 21st century predictive toxicology and new approach methodologies (NAMs) for the protection of human health and the environment.


Subject(s)
Animal Welfare, Research Report, Animals, Humans, Industries, Risk Assessment, Animal Testing Alternatives/methods
19.
Cutan Ocul Toxicol ; 43(1): 58-68, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37905558

ABSTRACT

Many sectors have seen complete replacement of the in vivo rabbit eye test with reproducible and relevant in vitro and ex vivo methods to assess the eye corrosion/irritation potential of chemicals. However, the in vivo rabbit eye test remains the standard test used for agrochemical formulations in some countries. Therefore, two defined approaches (DAs) for assessing conventional agrochemical formulations were developed, using the EpiOcularTM Eye Irritation Test (EIT) [Organisation for Economic Co-operation and Development (OECD) test guideline (TG) 492] and the Bovine Corneal Opacity and Permeability (OECD TG 437; BCOP) test with histopathology. Presented here are the results from testing 29 agrochemical formulations, which were evaluated against the United States Environmental Protection Agency's (EPA) pesticide classification system, and assessed using orthogonal validation, rather than direct concordance analysis with the historical in vivo rabbit eye data. Scientific confidence was established by evaluating the methods and testing results using an established framework that considers fitness for purpose, human biological relevance, technical characterisation, data integrity and transparency, and independent review. The in vitro and ex vivo methods used in the DAs were demonstrated to be as or more fit for purpose, reliable and relevant than the in vivo rabbit eye test. Overall, there is high scientific confidence in the use of these DAs for assessing the eye corrosion/irritation potential of agrochemical formulations.
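The two methods' standard decision criteria can be sketched as threshold rules (OECD TG 492: mean tissue viability at or below 60% predicts an irritant; OECD TG 437: an in vitro irritancy score above 55 identifies serious eye damage, at or below 3 no category). This is only the per-method threshold logic under those assumed cutoffs; the defined approaches in the study additionally integrate histopathology and map results onto EPA pesticide categories:

```python
def eit_call(mean_viability_pct):
    """EpiOcular EIT prediction: irritant if mean tissue viability <= 60%."""
    return "irritant" if mean_viability_pct <= 60.0 else "no category"

def bcop_call(ivis):
    """BCOP prediction from the in vitro irritancy score (IVIS)."""
    if ivis > 55.0:
        return "serious eye damage"
    if ivis <= 3.0:
        return "no category"
    return "no prediction"  # standalone BCOP cannot resolve mid-range scores
```

A defined approach combines such calls in a fixed sequence so that the same inputs always produce the same classification, which is what makes the approach reproducible across laboratories.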


Subject(s)
Corneal Opacity, Corneal Epithelium, Humans, Animals, Cattle, Rabbits, Eye, Corneal Epithelium/pathology, Agrochemicals/toxicity, Irritants/toxicity, Corneal Opacity/chemically induced, Corneal Opacity/pathology, Permeability, Animal Testing Alternatives
20.
J Infect Dis ; 228(Suppl 5): S337-S354, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37669225

ABSTRACT

The National Center for Advancing Translational Sciences (NCATS) Assay Guidance Manual (AGM) Workshop on 3D Tissue Models for Antiviral Drug Development, held virtually on 7-8 June 2022, provided comprehensive coverage of critical concepts intended to help scientists establish robust, reproducible, and scalable 3D tissue models to study viruses with pandemic potential. This workshop was organized by NCATS, the National Institute of Allergy and Infectious Diseases, and the Bill and Melinda Gates Foundation. During the workshop, scientific experts from academia, industry, and government provided an overview of 3D tissue models' utility and limitations, use of existing 3D tissue models for antiviral drug development, practical advice, best practices, and case studies about the application of available 3D tissue models to infectious disease modeling. This report includes a summary of each workshop session as well as a discussion of perspectives and challenges related to the use of 3D tissues in antiviral drug discovery.


Subject(s)
Antiviral Agents, Drug Discovery, Antiviral Agents/pharmacology, Antiviral Agents/therapeutic use, Biological Assay