Results 1 - 20 of 3,500
1.
Environ Health Perspect ; 132(8): 85002, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39106156

ABSTRACT

BACKGROUND: The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAMs development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, due to their intricacy, deep understanding and careful selection are imperative to align the appropriate methods with their intended applications. OBJECTIVES: This commentary aims to deepen the understanding of class-based approaches by elucidating the pivotal role of chemical similarity (structural and biological) in clustering and classification approaches (CCAs). It addresses the dichotomy between general end point-agnostic similarity, often entailing unsupervised analysis, and end point-specific similarity necessitating supervised learning. The goal is to highlight the nuances of these approaches, their applications, and common misuses. DISCUSSION: Understanding similarity is pivotal in toxicological research involving CCAs. The effectiveness of these approaches depends on the right definition and measure of similarity, which varies based on the context and objectives of the study. This choice is influenced by how chemical structures are represented and the respective labels indicating biological activity, if applicable. The distinction between unsupervised clustering and supervised classification methods is vital, requiring the use of end point-agnostic vs. end point-specific similarity definitions, respectively. Separate use or combination of these methods requires careful consideration to prevent bias and ensure relevance for the goal of the study. Unsupervised methods use end point-agnostic similarity measures to uncover general structural patterns and relationships, aiding hypothesis generation and facilitating exploration of datasets without the need for predefined labels or explicit guidance. Conversely, supervised techniques demand end point-specific similarity to group chemicals into predefined classes or to train classification models, allowing accurate predictions for new chemicals. Misuse can arise when unsupervised methods are applied to end point-specific contexts, like analog selection in read-across, leading to erroneous conclusions. This commentary provides insights into the significance of similarity and its role in supervised classification and unsupervised clustering approaches. https://doi.org/10.1289/EHP14001.
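
As a concrete illustration of the dichotomy this commentary describes, the sketch below clusters a few molecules with an end point-agnostic Tanimoto similarity and then trains a supervised classifier on the same fingerprints. It assumes RDKit and a recent scikit-learn; the SMILES strings and activity labels are hypothetical, not taken from the paper.

```python
# Minimal sketch (not from the commentary) contrasting end point-agnostic,
# unsupervised clustering with end point-specific, supervised classification
# on the same fingerprint representation. SMILES and labels are hypothetical.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "CCCO", "c1ccccc1", "c1ccccc1O", "CC(=O)O", "CCC(=O)O"]
labels = [0, 0, 1, 1, 0, 0]           # hypothetical end point-specific calls

fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=1024)
       for s in smiles]

# End point-agnostic similarity: pairwise Tanimoto distances on structure alone.
n = len(fps)
dist = np.array([[1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])
                  for j in range(n)] for i in range(n)])
clusters = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                   linkage="average").fit_predict(dist)
print("structure-only clusters:", clusters)

# End point-specific use of the same features: a supervised classifier whose
# notion of "similar" is shaped by the training labels.
X = np.array([list(fp) for fp in fps])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
query = AllChem.GetMorganFingerprintAsBitVect(
    Chem.MolFromSmiles("c1ccc(CO)cc1"), 2, nBits=1024)
print("predicted activity for a new analog:", clf.predict([list(query)]))
```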


Subject(s)
Machine Learning , Cluster Analysis , Unsupervised Machine Learning , Toxicology/methods , Algorithms
2.
Adv Sci (Weinh) ; 11(32): e2400389, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38923832

ABSTRACT

Hazard assessment is the first step in evaluating the potential adverse effects of chemicals. Traditionally, toxicological assessment has focused on the exposure, overlooking the impact of the exposed system on the observed toxicity. Systems toxicology, however, emphasizes how system properties significantly contribute to the observed response. Systems theory states that interactions store more information than individual elements, which has led to the adoption of network-based models to represent complex systems in many fields of the life sciences. Here, the authors develop a network-based approach to characterize toxicological responses in the context of a biological system, inferring biological system-specific networks. They directly link molecular alterations to the adverse outcome pathway (AOP) framework, establishing direct connections between omics data and toxicologically relevant phenotypic events. They apply this framework to a dataset including 31 engineered nanomaterials with different physicochemical properties in two in vitro models and one in vivo model, and demonstrate how the biological system is the driving force of the observed response. This work highlights the potential of network-based methods to significantly improve the understanding of toxicological mechanisms from a systems biology perspective and provides relevant considerations and future data-driven approaches for the hazard assessment of nanomaterials and other advanced materials.
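
To make the network idea tangible, here is a toy graph (not the authors' pipeline) that links hypothetical omics-level alterations to AOP key events and an adverse outcome; the genes, key events, and edges are invented for illustration only.

```python
# Illustrative sketch only: represent molecular alterations and AOP key events
# as a directed graph and score downstream events. All content is hypothetical.
import networkx as nx

G = nx.DiGraph()
deg = {"HMOX1": 2.1, "NFE2L2": 1.8, "TNF": 3.0}   # hypothetical log2 fold changes
edges = [
    ("HMOX1", "KE: oxidative stress"),
    ("NFE2L2", "KE: oxidative stress"),
    ("TNF", "KE: pro-inflammatory signalling"),
    ("KE: oxidative stress", "AO: tissue injury"),
    ("KE: pro-inflammatory signalling", "AO: tissue injury"),
]
for gene, score in deg.items():
    G.add_node(gene, layer="molecular", log2fc=score)
G.add_edges_from(edges)

# Score each key event by the summed perturbation of its upstream genes, then
# check whether a perturbed gene connects to the adverse outcome at all.
for node in G.nodes:
    if str(node).startswith("KE:"):
        upstream = [G.nodes[g].get("log2fc", 0.0) for g in G.predecessors(node)]
        print(node, "perturbation score:", round(sum(abs(x) for x in upstream), 2))

print("path from TNF to adverse outcome:", nx.has_path(G, "TNF", "AO: tissue injury"))
```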


Subject(s)
Adverse Outcome Pathways , Nanostructures , Nanostructures/toxicity , Humans , Systems Biology/methods , Animals , Toxicology/methods
3.
Toxicol Sci ; 200(2): 277-286, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38851876

ABSTRACT

A SEND toxicology data transformation, harmonization, and analysis platform was created to improve the identification of unique findings related to the intended target, species, and duration of dosing using data from multiple studies. The lack of a standardized digital format for data analysis had impeded large-scale analysis of in vivo toxicology studies. The CDISC SEND standard enables the analysis of data from multiple studies performed by different laboratories. This work describes methods to analyze data and automate cross-study analysis of toxicology studies. Cross-study analysis can be used to understand a single compound's toxicity profile across all studies performed and/or to evaluate on-target versus off-target toxicity for multiple compounds intended for the same pharmacological target. This work involved development of data harmonization/transformation strategies to enable cross-study analysis of both numerical and categorical SEND data. Four de-identified SEND datasets from the BioCelerate database were used for the analyses. Toxicity profiles for key organ systems were developed for the liver, kidney, male reproductive tract, endocrine system, and hematopoietic system using SEND domains. A cross-study analysis dashboard with a built-in user-defined scoring system was created for custom analyses, including visualizations to evaluate data at the organ system level and drill down into individual animal data. This data analysis provides the tools for scientists to compare toxicity profiles across multiple studies using SEND. A cross-study analysis of 2 different compounds intended for the same pharmacological target is described, and the analyses indicate potential on-target effects on the liver, kidney, and hematopoietic systems.
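
The harmonization step can be pictured with a small pandas example: stack an ALT record set from two SEND-like LB domains, normalize treated animals against each study's concurrent controls, and summarize across studies. The column names STUDYID, USUBJID, LBTESTCD, and LBSTRESN follow SEND conventions, but the data, the TRTGRP grouping column, and the simple z-score harmonization are hypothetical simplifications rather than the BioCelerate platform itself.

```python
# Sketch of cross-study harmonization of a SEND-like LB (lab results) domain.
# Data and the z-score normalization scheme are hypothetical.
import pandas as pd

lb_study1 = pd.DataFrame({
    "STUDYID": "S1", "USUBJID": ["S1-01", "S1-02", "S1-03", "S1-04"],
    "LBTESTCD": "ALT", "LBSTRESN": [32.0, 35.0, 80.0, 95.0],
    "TRTGRP": ["control", "control", "high", "high"],
})
lb_study2 = pd.DataFrame({
    "STUDYID": "S2", "USUBJID": ["S2-01", "S2-02", "S2-03", "S2-04"],
    "LBTESTCD": "ALT", "LBSTRESN": [40.0, 44.0, 120.0, 150.0],
    "TRTGRP": ["control", "control", "high", "high"],
})
lb = pd.concat([lb_study1, lb_study2], ignore_index=True)

# Per-study control statistics, then normalize every record against them.
ctrl_stats = (lb[lb["TRTGRP"] == "control"]
              .groupby(["STUDYID", "LBTESTCD"])["LBSTRESN"]
              .agg(ctrl_mean="mean", ctrl_sd="std")
              .reset_index())
harmonised = lb.merge(ctrl_stats, on=["STUDYID", "LBTESTCD"])
harmonised["Z_VS_CTRL"] = ((harmonised["LBSTRESN"] - harmonised["ctrl_mean"])
                           / harmonised["ctrl_sd"])

# Compare the treated-group signal across studies on a common scale.
summary = (harmonised[harmonised["TRTGRP"] == "high"]
           .groupby(["STUDYID", "LBTESTCD"])["Z_VS_CTRL"].mean())
print(summary)
```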


Subject(s)
Toxicity Tests , Animals , Toxicity Tests/methods , Databases, Factual , Toxicology/methods , Humans , Male
4.
Regul Toxicol Pharmacol ; 151: 105663, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871173

ABSTRACT

As the United States and the European Union continue their steady march towards the acceptance of new approach methodologies (NAMs), we need to ensure that the available tools are fit for purpose. Critics will be well-positioned to caution against NAMs acceptance and adoption if the tools turn out to be inadequate. In this paper, we focus on Quantitative Structure-Activity Relationship (QSAR) models and highlight how the training database affects the quality and performance of these models. Our analysis goes to the point of asking, "are the endpoints extracted from the experimental studies in the database trustworthy, or are they false negatives/positives themselves?" We also discuss the impacts of chemistry on QSAR models, including issues with 2-D structure analyses when dealing with isomers, metabolism, and toxicokinetics. We close our analysis with a discussion of challenges associated with translational toxicology, specifically the lack of adverse outcome pathways/adverse outcome pathway networks (AOPs/AOPNs) for many higher tier endpoints. We recognize that it takes a collaborative effort to build better and higher quality QSAR models, especially for higher tier toxicological endpoints. Hence, it is critical to bring toxicologists, statisticians, and machine learning specialists together to discuss and solve these challenges to get relevant predictions.
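
One training-database check implied by the authors' questions can be sketched as follows: collapse records to a 2-D connectivity key and flag structures that carry conflicting experimental calls. This assumes RDKit; the SMILES and activity labels are hypothetical, and the check is an illustration rather than the paper's workflow.

```python
# Hedged sketch of a training-set quality check before QSAR modelling:
# find duplicate 2-D structures with conflicting activity calls.
import pandas as pd
from rdkit import Chem

records = pd.DataFrame({
    "smiles": ["CCO", "OCC", "c1ccccc1N", "c1ccccc1N", "CC(Cl)CC"],
    "activity": [0, 1, 1, 1, 0],   # 0 = negative, 1 = positive (hypothetical)
})

# The first InChIKey block encodes 2-D connectivity only, so stereoisomers
# collapse together -- one of the representation issues the paper raises.
records["inchikey_2d"] = [
    Chem.MolToInchiKey(Chem.MolFromSmiles(s)).split("-")[0]
    for s in records["smiles"]
]

conflicts = (records.groupby("inchikey_2d")["activity"]
             .nunique()
             .loc[lambda n: n > 1])
print("structures with conflicting labels:")
print(records[records["inchikey_2d"].isin(conflicts.index)])
```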


Subject(s)
Databases, Factual , Quantitative Structure-Activity Relationship , Humans , Animals , Adverse Outcome Pathways , Toxicology/methods , Endpoint Determination
5.
Toxicol Pathol ; 52(2-3): 123-137, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38888280

ABSTRACT

Complex in vitro models (CIVMs) offer the potential to increase the clinical relevance of preclinical efficacy and toxicity assessments and reduce the reliance on animals in drug development. The European Society of Toxicologic Pathology (ESTP) and the Society of Toxicologic Pathology (STP) are collaborating to highlight the role of pathologists in the development and use of CIVMs. Pathologists are trained in comparative animal medicine, which enhances their understanding of the mechanisms of human and animal diseases and allows them to bridge between animal models and humans. This skill set is important for CIVM development, validation, and data interpretation. Ideally, diverse teams of scientists, including engineers, biologists, pathologists, and others, should collaboratively develop and characterize novel CIVMs and collectively assess their precise use cases (context of use). Morphological evaluation of CIVMs should be an essential part of this process; it requires robust histological workflows and image analysis techniques, and the findings need to be correlated with translational biomarkers. In this review, we demonstrate how such tissue technologies and analytics support the development and use of CIVMs for drug efficacy and safety evaluations. We encourage the scientific community to explore similar options for their projects and to engage with health authorities on the use of CIVMs in benefit-risk assessment.


Subject(s)
Pathologists , Pathology , Toxicology , Humans , Toxicology/methods , Animals , Bioengineering , Toxicity Tests , Drug Evaluation, Preclinical , In Vitro Techniques
6.
Food Chem Toxicol ; 190: 114809, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38857761

ABSTRACT

This Special Issue contains articles on applications of various new approach methodologies (NAMs) in the field of toxicology and risk assessment. These NAMs include in vitro high-throughput screening, quantitative structure-activity relationship (QSAR) modeling, physiologically based pharmacokinetic (PBPK) modeling, network toxicology analysis, molecular docking simulation, omics, machine learning, deep learning, and "template-and-anchor" multiscale computational modeling. These in vitro and in silico approaches complement each other and can be integrated to support different applications of toxicology, including food safety assessment, dietary exposure assessment, chemical toxicity potency screening and ranking, chemical toxicity prediction, chemical toxicokinetic simulation, and investigation of the potential mechanisms of toxicity, as introduced further in selected articles in this Special Issue.
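
As a toy illustration of the in silico side of these NAMs, the following one-compartment kinetic model is solved with SciPy; a real PBPK model has many physiological compartments and chemical-specific parameters, and every value below is an arbitrary placeholder.

```python
# Toy one-compartment kinetic simulation -- far simpler than a PBPK model.
# Rate constants, dose, and volume are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, dose, vd = 1.0, 0.2, 100.0, 10.0   # 1/h, 1/h, mg, L (hypothetical)

def one_compartment(t, y):
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

sol = solve_ivp(one_compartment, (0.0, 24.0), [dose, 0.0],
                t_eval=np.linspace(0.0, 24.0, 25))
conc = sol.y[1] / vd                        # plasma concentration, mg/L
print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {sol.t[conc.argmax()]:.0f} h")
```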


Subject(s)
Food Safety , Machine Learning , Risk Assessment/methods , Humans , Quantitative Structure-Activity Relationship , Toxicokinetics , Toxicology/methods
7.
Toxicol Pathol ; 52(2-3): 138-148, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38840532

ABSTRACT

In December 2021, the United States Food and Drug Administration (FDA) issued the final guidance for industry titled Pathology Peer Review in Nonclinical Toxicology Studies: Questions and Answers. The stated purpose of the FDA guidance is to provide information to sponsors, applicants, and nonclinical laboratory personnel regarding the management and conduct of histopathology peer review as part of nonclinical toxicology studies conducted in compliance with good laboratory practice (GLP) regulations. On behalf of and in collaboration with global societies of toxicologic pathology and the Society of Quality Assurance, the Scientific and Regulatory Policy Committee (SRPC) of the Society of Toxicologic Pathology (STP) initiated a review of this FDA guidance. The STP has previously published multiple papers related to the scientific conduct of a pathology peer review of nonclinical toxicology studies and appropriate documentation practices. The objectives of this review are to provide an in-depth analysis and summary interpretation of the FDA recommendations and share considerations for the conduct of pathology peer review in nonclinical toxicology studies that claim compliance to GLP regulations. In general, this working group is in agreement with the recommendations of the FDA guidance, which has added clear expectations for pathology peer review preparation, conduct, and documentation.


Subject(s)
Pathology , Peer Review , Toxicology , United States Food and Drug Administration , United States , Toxicology/standards , Toxicology/legislation & jurisprudence , Toxicology/methods , Peer Review/standards , Pathology/standards , Guidelines as Topic , Animals , Toxicity Tests/standards , Toxicity Tests/methods
8.
Toxicol Sci ; 200(2): 228-234, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38713198

ABSTRACT

Arguably the most famous principle of toxicology is "The dose makes the poison," formulated by Paracelsus in the 16th century. Application of Paracelsus's principle to mechanistic toxicology may be challenging, as one compound may affect many molecular pathways at different doses with different and often nonlinear dose-response relationships. As a result, many mechanistic studies of environmental and occupational compounds use high doses of xenobiotics, motivated by the need to see a clear signal indicating disruption of a particular molecular pathway. This approach ignores the possibility that the same xenobiotic may affect different molecular mechanism(s) at much lower doses relevant to human exposures. To amend mechanistic toxicology with a simple and concise guiding principle, I suggest a recontextualization of Paracelsus's principle, following its letter and spirit: "The dose disrupts the pathway." Justification of this statement includes observations that many environmental and occupational xenobiotics affect a broad range of molecular cascades, that most molecular pathways are sensitive to chemical exposures, and that different molecular pathways are sensitive to different doses of a chemical compound. I suggest that this statement may become a useful guidance and educational tool in a range of toxicological applications, including experimental design, comparative analysis of mechanistic hypotheses, evaluation of the quality of toxicological studies, and risk assessment.
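
A worked example helps here: fitting Hill curves to synthetic responses for two pathways shows the same chemical "disrupting" one pathway at low doses and the other only at much higher doses. This is a sketch of the principle, not an analysis from the paper; the response data are invented.

```python
# Worked illustration of "the dose disrupts the pathway": one chemical, two
# pathways, very different fitted potencies. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, ec50, n):
    return top * dose**n / (ec50**n + dose**n)

doses = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
pathway_a = np.array([0.01, 0.09, 0.50, 0.91, 0.99, 1.00])   # sensitive pathway
pathway_b = np.array([0.00, 0.00, 0.01, 0.09, 0.50, 0.91])   # high-dose pathway

for name, resp in [("pathway A", pathway_a), ("pathway B", pathway_b)]:
    (top, ec50, n), _ = curve_fit(hill, doses, resp, p0=[1.0, 10.0, 1.0],
                                  bounds=(0.0, [2.0, 1e4, 5.0]))
    print(f"{name}: fitted EC50 ~ {ec50:.1f} (arbitrary dose units)")
```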


Subject(s)
Dose-Response Relationship, Drug , Toxicology , Xenobiotics , Xenobiotics/toxicity , Toxicology/methods , Humans , Animals , Signal Transduction/drug effects , Risk Assessment
9.
Arch Toxicol ; 98(8): 2309-2330, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38806717

ABSTRACT

Mechanistic exploration is an important part of toxicological studies. However, traditional cell and animal models can no longer meet the current needs for in-depth studies of toxicological mechanisms. Three-dimensional (3D) organoids derived from human embryonic stem cells (hESCs) or induced pluripotent stem cells (hiPSCs) are ideal experimental models for the study of toxicological effects and mechanisms: they recapitulate the human tissue microenvironment and provide a reliable method for studying complex cell-cell interactions. This article provides a comprehensive overview of the state of 3D organoid technology in toxicological studies, including a bibliometric analysis of the existing literature and an exploration of the latest advances in toxicological mechanisms. The use of 3D organoids in toxicology research is growing rapidly, with applications in disease modeling, organs-on-chips, and drug toxicity screening being emphasized, but academic communication among countries/regions, institutions, and researchers needs to be further strengthened. Attempts to study the toxicological mechanisms of exogenous chemicals such as heavy metals, nanoparticles, drugs, and organic pollutants are also increasing. With a standardized methodology in place, 3D organoids can be expected to be applied more effectively to the safety evaluation of exogenous chemicals.


Subject(s)
Bibliometrics , Induced Pluripotent Stem Cells , Organoids , Toxicity Tests , Organoids/drug effects , Humans , Toxicity Tests/methods , Induced Pluripotent Stem Cells/drug effects , Animals , Toxicology/methods , Human Embryonic Stem Cells , Cell Culture Techniques, Three Dimensional/methods
10.
Toxicol Ind Health ; 40(9-10): 556-558, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38821533

ABSTRACT

The objective of establishing occupational exposure limits (OELs) is to use them as a risk management tool, ensuring the protection of workers' health and well-being from hazardous substances present in the workplace. To develop and regulate an OEL, it is essential to conduct toxicological studies in both animals and humans, to determine the dose-response relationship for each chemical compound, and to determine whether that relationship is linear or non-linear. Because the OELs suggested by different organizations or countries reflect only their own scientific methods, knowledge, and judgment, their applicability in other countries is not assured. It is therefore neither scientific nor logical to simply adopt the permissible limits recommended in Western countries. In most West Asian nations, there are significant differences in the suggested OEL levels between the reference organizations, and in assessing and managing the risk of a specific situation, using any of the proposed OELs can lead to contradictory results. Suggestions are made for developing and improving the basis for determining OELs for chemical pollutants in West Asian countries.
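
For orientation, one generic way an OEL is derived from animal data is the NOAEL-and-uncertainty-factor arithmetic sketched below; the choice of factors is exactly the expert-judgment step on which organizations diverge, and all numbers are hypothetical rather than taken from the article.

```python
# Hedged arithmetic sketch of a generic NOAEL-based OEL derivation.
# All values are hypothetical placeholders.
noael_mg_per_kg_day = 15.0          # from a repeat-dose animal study
body_weight_kg = 70.0               # default worker body weight
inhalation_volume_m3 = 10.0         # air inhaled per 8-h shift
uf_interspecies = 10.0              # animal-to-human extrapolation
uf_intraspecies = 5.0               # worker-to-worker variability

oel_mg_per_m3 = (noael_mg_per_kg_day * body_weight_kg /
                 (inhalation_volume_m3 * uf_interspecies * uf_intraspecies))
print(f"illustrative OEL ~ {oel_mg_per_m3:.1f} mg/m^3")
```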


Subject(s)
Hazardous Substances , Occupational Exposure , Iran , Humans , Occupational Exposure/prevention & control , Hazardous Substances/toxicity , Risk Assessment , Toxicology/standards , Toxicology/methods , Occupational Health/standards , Workplace , Animals , Dose-Response Relationship, Drug
12.
Regul Toxicol Pharmacol ; 150: 105632, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38679316

ABSTRACT

The replacement of a proportion of concurrent controls by virtual controls in nonclinical safety studies has gained traction over the last few years. This is supported by foundational work, encouraged by regulators, and aligned with societal expectations regarding the use of animals in research. This paper provides an overview of the points to consider for any institution on the verge of implementing this concept, with emphasis on database creation, risks, and discipline-specific perspectives.
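
The core idea can be sketched in a few lines: derive a reference interval from historical (virtual) control data and use it alongside a reduced concurrent control group. The values below are invented, and the percentile-based range is only one of several possible approaches, not any institution's implementation.

```python
# Sketch of a virtual-control reference interval; all values are hypothetical.
import numpy as np

historical_ctrl_alt = np.array([31, 35, 29, 40, 33, 38, 36, 30, 34, 37], float)
lo, hi = np.percentile(historical_ctrl_alt, [2.5, 97.5])   # virtual-control range

concurrent_ctrl = np.array([32.0, 36.0])       # reduced concurrent control group
treated = np.array([34.0, 78.0, 85.0])

print(f"virtual control range: {lo:.1f}-{hi:.1f}")
print("concurrent controls inside range:",
      np.all((concurrent_ctrl >= lo) & (concurrent_ctrl <= hi)))
print("treated values outside range:", treated[(treated < lo) | (treated > hi)])
```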


Subject(s)
Toxicity Tests , Toxicology , Animals , Toxicology/methods , Toxicity Tests/methods , Humans , Databases, Factual , Risk Assessment
14.
Int J Toxicol ; 43(4): 377-386, 2024.
Article in English | MEDLINE | ID: mdl-38606470

ABSTRACT

The inclusion of recovery animals in nonclinical safety studies that support clinical trials is undertaken with a wide diversity of approaches even while operating under harmonized regulatory guidance. While empirical evaluation of reversibility may enhance the overall nonclinical risk assessment, there are often overlooked opportunities to reduce recovery animal use by leveraging robust scientific and regulatory information. In the past, there were several attempts to benchmark recovery practices; however, recommendations have not been consistently applied across the pharmaceutical industry. A working group (WG) sponsored by the 3Rs Translational and Predictive Sciences Leadership Group of the IQ Consortium conducted a survey of current industry practice related to the evaluation of reversibility/recovery in repeat dose toxicity studies. Discussion among the WG representatives included member company strategies and case studies that highlight challenges and opportunities for continuous refinements in the use of recovery animals. The case studies presented in this paper demonstrate increasing alignment with the Society of Toxicologic Pathology recommendations (2013) towards (1) excluding recovery phase cohorts by default (include only when scientifically justified), (2) minimizing the number of recovery groups (e.g., control and one dose level), and (3) excluding controls in the recovery cohort by leveraging external and/or dosing phase data. Recovery group exclusion and decisions regarding the timing of reversibility evaluation may be driven by indication, modality, and/or other scientific or strategic factors using a weight of evidence approach. The results and recommendations discussed present opportunities to further decrease animal use without impacting the quality of human risk assessment.


Subject(s)
Toxicity Tests , Animals , Risk Assessment , Toxicology/standards , Toxicology/methods , Humans
15.
Arch Toxicol ; 98(7): 2047-2063, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38689008

ABSTRACT

The ongoing transition from chemical hazard and risk assessment based on animal studies to assessment relying mostly on non-animal data requires a multitude of novel experimental methods, and this means that guidance on the validation and standardisation of test methods intended for international applicability and acceptance needs to be updated. These so-called new approach methodologies (NAMs) must be applicable to the chemical regulatory domain and provide reliable data which are relevant to hazard and risk assessment. Confidence in and use of NAMs will depend on their reliability and relevance, and both are thoroughly assessed by validation. Validation is, however, a time- and resource-demanding process. As updates to validation guidance are made, the valuable components must be kept: reliable data are and will remain fundamental. In 2016, the scientific community was made aware of the general crisis in scientific reproducibility; validated methods must not fall into this trap. In this commentary, we emphasize the central importance of ring trials in the validation of experimental methods. Ring trials are sometimes considered to be a major hold-up with little value added to the validation. Here, we clarify that ring trials are indispensable for demonstrating the robustness and reproducibility of a new method. Further, methods do fail in method transfer and ring trials owing to various stumbling blocks, but these failures provide learnings that help ensure the robustness of new methods. At the same time, we identify what it would take to perform ring trials more efficiently, and how ring trials fit into the much-needed update to the guidance on the validation of NAMs.
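
The statistic a ring trial is meant to deliver can be illustrated simply: between-laboratory concordance of classification calls for coded test chemicals. The calls below are invented, and real ring-trial analyses are considerably more detailed.

```python
# Simple sketch of between-laboratory concordance from ring-trial results.
# The calls are invented for illustration.
import pandas as pd

calls = pd.DataFrame({
    "chemical": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "lab":      ["L1", "L2", "L3"] * 3,
    "call":     [1, 1, 1, 0, 0, 1, 1, 1, 1],   # positive/negative classification
})

# Between-laboratory reproducibility: fraction of chemicals on which all
# participating labs agree.
agreement = calls.groupby("chemical")["call"].nunique().eq(1)
print(f"between-lab concordance: {agreement.mean():.0%}  "
      f"(discordant: {list(agreement[~agreement].index)})")
```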


Subject(s)
Toxicology , Reproducibility of Results , Risk Assessment/methods , Animals , Toxicology/methods , Toxicology/standards , Toxicity Tests/methods , Humans , Validation Studies as Topic , Research Design/standards , Animal Testing Alternatives/methods
16.
Molecules ; 29(8)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38675645

ABSTRACT

In the realm of predictive toxicology for small molecules, the applicability domain of QSAR models is often limited by the coverage of the chemical space in the training set. Consequently, classical models fail to provide reliable predictions for wide classes of molecules. However, the emergence of innovative data collection methods such as intensive hackathons holds promise for quickly expanding the available chemical space for model construction. Combined with algorithmic refinement methods, these tools can address the challenges of toxicity prediction, enhancing both the robustness and applicability of the corresponding models. This study aimed to investigate the roles of gradient boosting and strategic data aggregation in enhancing the predictive ability of models for the toxicity of small organic molecules. We focused on evaluating the impact of incorporating fragment features and expanding the chemical space, facilitated by a comprehensive dataset procured in an open hackathon. We used gradient boosting techniques, accounting for critical features such as the structural fragments or functional groups often associated with manifestations of toxicity.
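
The modelling step can be sketched with scikit-learn's gradient boosting on binary fragment-presence features; the feature matrix, labels, and "toxicophore" rule below are random stand-ins rather than the hackathon dataset.

```python
# Minimal sketch of gradient boosting on fragment-presence features for a
# toxicity end point. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 64))            # presence/absence of 64 fragments
noise = rng.choice([0, 1], size=200, p=[0.9, 0.1])
y = (X[:, 3] | X[:, 17]) ^ noise                  # noisy "toxicophore" rule

model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   learning_rate=0.05, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC AUC: {scores.mean():.2f} +/- {scores.std():.2f}")

# Feature importances hint at which fragments drive the predictions.
top = np.argsort(model.fit(X, y).feature_importances_)[::-1][:3]
print("most important fragment columns:", top)
```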


Subject(s)
Algorithms , Quantitative Structure-Activity Relationship , Toxicology/methods , Humans
18.
Arch Toxicol ; 98(6): 1727-1740, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38555325

ABSTRACT

The first step in the hazard or risk assessment of chemicals should be to formulate the problem through a systematic and iterative process aimed at identifying and defining the factors critical to the assessment. However, no general agreement exists on what components an in silico toxicology problem formulation (PF) should include. The present work aims to develop a PF framework relevant to the application of in silico models for chemical toxicity prediction. We modified and applied a PF framework from the general risk assessment literature to peer-reviewed papers describing PFs associated with in silico toxicology models. Important gaps between the general risk assessment literature and the analyzed PF literature associated with in silico toxicology methods were identified. While the former emphasizes the need for PFs to address higher-level conceptual questions, the latter does not. There is also little consistency in the latter regarding the PF components addressed, reinforcing the need for a PF framework that enables users of in silico toxicology models to answer the central conceptual questions aimed at defining the components critical to the model application. Using the developed framework, we highlight where uncertainty can manifest in an in silico toxicology PF when particular components are missing or only implicitly described. The framework represents the next step in standardizing in silico toxicology PF components. It can also be used to improve the understanding of how uncertainty becomes apparent in an in silico toxicology PF, thus facilitating ways to address that uncertainty.


Subject(s)
Computer Simulation , Toxicology , Risk Assessment/methods , Toxicology/methods , Humans , Uncertainty , Animals , Toxicity Tests/methods
19.
Clin Toxicol (Phila) ; 62(3): 164-167, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38525861

ABSTRACT

BACKGROUND: Paracetamol overdose is the most common cause of acute liver failure in the United States. Administration of acetylcysteine is the standard of care for this intoxication. Laboratory values and clinical criteria are used to guide treatment duration, but decision-making is nuanced and often complex and difficult. The purpose of this study was to evaluate the effect of the introduction of a medical toxicology service on the rate of errors in the management of paracetamol overdose. METHODS: This was a single-center, retrospective, cohort evaluation. Patients with suspected paracetamol overdose were divided into two groups: those presenting in the 1-year period before and those in the 1-year period after the introduction of the medical toxicology service. The primary outcome was the frequency of deviations from established management of paracetamol intoxication, using international guidelines as a reference. RESULTS: Fifty-four patients were eligible for the study (20 pre-toxicology-service, 34 post-toxicology-service). The frequency of incorrect therapeutic decisions was significantly lower in the post-implementation group than in the pre-implementation group (P = 0.005). DISCUSSION: Our study suggests that a medical toxicology service reduces the incidence of management errors, including the number of missed acetylcysteine doses, in patients with paracetamol overdose. Limitations include the retrospective design and the single-center setting, which may limit generalizability. CONCLUSIONS: The implementation of a medical toxicology service was associated with a decrease in the number of errors in the management of paracetamol overdose.
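
The kind of comparison reported can be illustrated with Fisher's exact test on a 2x2 table of error counts; the counts below are invented, since the abstract does not give the raw table.

```python
# Back-of-the-envelope sketch of a pre/post error-rate comparison.
# The 2x2 counts are hypothetical, not the study's data.
from scipy.stats import fisher_exact

#                errors, no errors
pre_service  = [12, 8]     # hypothetical split of the 20 pre-service patients
post_service = [6, 28]     # hypothetical split of the 34 post-service patients

odds_ratio, p_value = fisher_exact([pre_service, post_service])
print(f"odds ratio {odds_ratio:.1f}, p = {p_value:.3f}")
```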


Subject(s)
Acetaminophen , Acetylcysteine , Drug Overdose , Tertiary Care Centers , Humans , Acetaminophen/poisoning , Retrospective Studies , Drug Overdose/therapy , Drug Overdose/drug therapy , Female , Male , Adult , Acetylcysteine/therapeutic use , Middle Aged , Analgesics, Non-Narcotic/poisoning , Antidotes/therapeutic use , Toxicology/methods , Young Adult
20.
Toxicol Sci ; 199(1): 29-39, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38374304

ABSTRACT

To avoid adverse events in humans, toxicity studies in nonclinical species have been the foundation of safety evaluation in the pharmaceutical industry. However, it is recognized that working with animals in research is a privilege, and conscientious use should always respect the 3Rs: replacement, reduction, and refinement. In the wake of the shortages in routine nonrodent species and considering that nonanimal methods are not yet sufficiently mature, the value of the rabbit as a nonrodent species is worth exploring. Historically used in vaccine, cosmetic, and medical device testing, the rabbit is seldom used today as a second species in pharmaceutical development, except for embryo-fetal development studies, ophthalmic therapeutics, some medical devices and implants, and vaccines. Although several factors affect the decision of species selection, including pharmacological relevance, pharmacokinetics, and ADME considerations, there are no perfect animal models. In this forum article, we bring together experts from veterinary medicine, industry, contract research organizations, and government to explore the pros and cons, residual concerns, and data gaps regarding the use of the rabbit for general toxicity testing.


Subject(s)
Toxicity Tests , Rabbits , Animals , Species Specificity , Models, Animal , Animal Testing Alternatives , Humans , Toxicology/methods