Results 1 - 20 of 70
1.
Annu Rev Pharmacol Toxicol ; 64: 191-209, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-37506331

ABSTRACT

Traditionally, chemical toxicity is determined by in vivo animal studies, which are low throughput, expensive, and sometimes fail to predict compound toxicity in humans. Due to the increasing number of chemicals in use and the high rate of drug candidate failure due to toxicity, it is imperative to develop in vitro, high-throughput screening methods to determine toxicity. The Tox21 program, a unique research consortium of federal public health agencies, was established to address and identify toxicity concerns in a high-throughput, concentration-responsive manner using a battery of in vitro assays. In this article, we review the advancements in high-throughput robotic screening methodology and informatics processes to enable the generation of toxicological data, and their impact on the field; further, we discuss the future of assessing environmental toxicity utilizing efficient and scalable methods that better represent the corresponding biological and toxicodynamic processes in humans.
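The concentration-response screening described above is commonly summarized by fitting a Hill model to each chemical's response curve. The sketch below is illustrative only (it is not the Tox21 analysis pipeline); the parameter names `top`, `ac50`, and `n` and the synthetic concentration series are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Hill concentration-response model: response rises from 0 toward `top`,
    reaching half-maximum at concentration `ac50` with slope parameter `n`."""
    return top * conc**n / (ac50**n + conc**n)

# Synthetic 8-point concentration series (µM), loosely mimicking a qHTS design
conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0])
resp = hill(conc, top=100.0, ac50=5.0, n=1.2)

# Recover the parameters from the (noise-free) synthetic curve
params, _ = curve_fit(hill, conc, resp, p0=[90.0, 1.0, 1.0])
top_fit, ac50_fit, n_fit = params
print(round(ac50_fit, 2))
```

In practice the fitted AC50 (the half-maximal activity concentration) is the potency value carried forward into downstream hazard models.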


Subject(s)
High-Throughput Screening Assays , Toxicology , Animals , Humans , High-Throughput Screening Assays/methods , Toxicology/methods
2.
Altern Lab Anim ; : 2611929241266472, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39044652

ABSTRACT

The scientific and ethical issues associated with the use of animal-derived antibodies in research can be overcome by the use of animal-free, sequence-defined recombinant antibodies, whose benefits are well documented. Here, we describe progress made following a 2019 expert meeting focused on improving the quality and reproducibility of biomedical research by accelerating the production and use of animal-free recombinant antibodies in the USA. In the five intervening years since the meeting, participants have established multifaceted initiatives to tackle the next steps outlined during the meeting. These initiatives include: prioritising the replacement of ascites-derived and polyclonal antibodies; distributing educational materials describing recombinant antibodies; fostering public-private partnerships to increase access to recombinant antibodies; and increasing the availability of funding for recombinant antibody development. Given the widescale use of antibodies across scientific disciplines, a transition to modern antibody production methods relies on a commitment from government agencies, universities, industry and funding organisations, to initiatives such as those outlined here.

3.
Crit Rev Toxicol ; 53(7): 385-411, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37646804

ABSTRACT

Chemical regulatory authorities around the world require systemic toxicity data from acute exposures via the oral, dermal, and inhalation routes for human health risk assessment. To identify opportunities for regulatory uses of non-animal replacements for these tests, we reviewed acute systemic toxicity testing requirements for jurisdictions that participate in the International Cooperation on Alternative Test Methods (ICATM): Brazil, Canada, China, the European Union, Japan, South Korea, Taiwan, and the USA. The chemical sectors included in our review of each jurisdiction were cosmetics, consumer products, industrial chemicals, pharmaceuticals, medical devices, and pesticides. We found acute systemic toxicity data were most often required for hazard assessment, classification, and labeling, and to a lesser extent quantitative risk assessment. Where animal methods were required, animal reduction methods were typically recommended. For many jurisdictions and chemical sectors, non-animal alternatives are not accepted, but several jurisdictions provide guidance to support the use of test waivers to reduce animal use for specific applications. An understanding of international regulatory requirements for acute systemic toxicity testing will inform ICATM's strategy for the development, acceptance, and implementation of non-animal alternatives to assess the health hazards and risks associated with acute toxicity.

4.
Chem Res Toxicol ; 35(6): 992-1000, 2022 06 20.
Article in English | MEDLINE | ID: mdl-35549170

ABSTRACT

Computational modeling grounded in reliable experimental data can help design effective non-animal approaches to predict the eye irritation and corrosion potential of chemicals. The National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) has compiled and curated a database of in vivo eye irritation studies from the scientific literature and from stakeholder-provided data. The database contains 810 annotated records of 593 unique substances, including mixtures, categorized according to UN GHS and US EPA hazard classifications. This study reports a set of in silico models to predict EPA and GHS hazard classifications for chemicals and mixtures, accounting for purity by setting thresholds of 100% and 10% concentration. We used two approaches to predict classification of mixtures: conventional and mixture-based. Conventional models evaluated each substance based on the chemical structure of its major component. These models achieved balanced accuracy in the range of 68-80% and 87-96% for the 100% and 10% test concentration thresholds, respectively. Mixture-based models, which accounted for all known components in the substance by weighted feature averaging, showed similar or slightly higher accuracy of 72-79% and 89-94% for the respective thresholds. We also noted a strong trend between the pH feature metric calculated for each substance and its activity. Across all the models, the calculated pH of inactive substances was within one log10 unit of neutral pH, on average, while for active substances, pH varied from neutral by at least 2 log10 units. This pH dependency is especially important for complex mixtures. Additional evaluation on an external test set of 673 substances obtained from ECHA dossiers achieved balanced accuracies of 64-71%, which suggests that these models can be useful in screening compounds for ocular irritation potential. Negative predictive value was particularly high and indicates the potential application of these models in a bottom-up approach to identify nonirritant substances.
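The "weighted feature averaging" used by the mixture-based models can be sketched as a composition-weighted mean of each component's descriptor vector. This is a minimal illustration, not the published implementation; the two-column descriptor layout and the component fractions are hypothetical.

```python
import numpy as np

def mixture_features(components):
    """Composition-weighted average of component descriptor vectors.

    `components` is a list of (weight_fraction, descriptor_list) pairs;
    fractions are renormalised so partial compositions still work.
    """
    weights = np.array([w for w, _ in components], dtype=float)
    feats = np.array([f for _, f in components], dtype=float)
    weights = weights / weights.sum()
    return weights @ feats  # one averaged descriptor vector for the mixture

# 80% of a component with descriptors [2.0, 120.0] (say, logP and MW),
# 20% of a component with descriptors [-1.0, 60.0]
avg = mixture_features([(0.8, [2.0, 120.0]), (0.2, [-1.0, 60.0])])
print(avg)  # [1.4, 108.0]
```

The averaged vector is then fed to the classifier in place of the major component's descriptors, which is why the two approaches converge when one component dominates.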


Subject(s)
Irritants , Toxic Optic Neuropathy , Animal Testing Alternatives , Animals , Computer Simulation , Eye , Humans , Irritants/toxicity , United States , United States Environmental Protection Agency
5.
Arch Toxicol ; 96(11): 2865-2879, 2022 11.
Article in English | MEDLINE | ID: mdl-35987941

ABSTRACT

Robust and efficient processes are needed to establish scientific confidence in new approach methodologies (NAMs) if they are to be considered for regulatory applications. NAMs need to be fit for purpose, reliable and, for the assessment of human health effects, provide information relevant to human biology. They must also be independently reviewed and transparently communicated. Ideally, NAM developers should communicate with stakeholders such as regulators and industry to identify the question(s) and specified purpose that the NAM is intended to address, and the context in which it will be used. Assessment of the biological relevance of the NAM should focus on its alignment with human biology, mechanistic understanding, and ability to provide information that leads to health protective decisions, rather than solely comparing NAM-based chemical testing results with those from traditional animal test methods. However, when NAM results are compared to historical animal test results, the variability observed within animal test method results should be used to inform performance benchmarks. Building on previous efforts, this paper proposes a framework comprising five essential elements to establish scientific confidence in NAMs for regulatory use: fitness for purpose, human biological relevance, technical characterization, data integrity and transparency, and independent review. Universal uptake of this framework would facilitate the timely development and use of NAMs by the international community. While this paper focuses on NAMs for assessing human health effects of pesticides and industrial chemicals, many of the suggested elements are expected to apply to other types of chemicals and to ecotoxicological effect assessments.


Subject(s)
Ecotoxicology , Pesticides , Animals , Humans , Research Design , Risk Assessment
6.
Biologicals ; 78: 36-44, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35753962

ABSTRACT

The U.S. Department of Agriculture (USDA) regulates the potency testing of leptospirosis vaccines, which are administered to animals to protect against infection by Leptospira bacteria. Despite the long-term availability of in vitro test methods for assessing batch potency, the use of hamsters in lethal in vivo batch potency testing persists to varying degrees across leptospirosis vaccine manufacturers. For all manufacturers of these products, data collected from public USDA records show an estimated 40% decline in the annual use of hamsters from 2014 to 2020, with an estimated 55% decrease in the number of hamsters expected to have been used in leptospirosis vaccine potency tests (i.e., those in USDA Category E). An estimated 49,000 hamsters were used in 2020, with about 15,000 hamsters in Category E specifically. Based on this assessment, additional efforts are needed to fully implement in vitro batch potency testing as a replacement for the in vivo batch potency test. We propose steps that can be taken collaboratively by the USDA Center for Veterinary Biologics (CVB), manufacturers of leptospirosis vaccines, government agencies, and non-governmental organizations to accelerate broader use of the in vitro approach.


Subject(s)
Leptospira , Leptospirosis , Animals , Bacterial Vaccines , Biological Assay , Cricetinae , Leptospirosis/prevention & control , Leptospirosis/veterinary , United States , Vaccine Potency
7.
Regul Toxicol Pharmacol ; 131: 105160, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35311659

ABSTRACT

Rodent cancer bioassays have been long-required studies for regulatory assessment of human cancer hazard and risk. These studies use hundreds of animals, are resource intensive, and certain aspects of these studies have limited human relevance. The past 10 years have seen an exponential growth of new technologies with the potential to effectively evaluate human cancer hazard and risk while reducing, refining, or replacing animal use. To streamline and facilitate uptake of new technologies, a workgroup comprised of scientists from government, academia, non-governmental organizations, and industry stakeholders developed a framework for waiver rationales of rodent cancer bioassays for consideration in agrochemical safety assessment. The workgroup used an iterative approach, incorporating regulatory agency feedback, and identifying critical information to be considered in a risk assessment-based weight of evidence determination of the need for rodent cancer bioassays. The reporting framework described herein was developed to support a chronic toxicity and carcinogenicity study waiver rationale, which includes information on use pattern(s), exposure scenario(s), pesticidal mode-of-action, physicochemical properties, metabolism, toxicokinetics, toxicological data including mechanistic data, and chemical read-across from similar registered pesticides. The framework could also be applied to endpoints other than chronic toxicity and carcinogenicity, and for chemicals other than agrochemicals.


Subject(s)
Neoplasms , Pesticides , Agrochemicals/toxicity , Animals , Biological Assay , Carcinogenicity Tests , Pesticides/toxicity , Risk Assessment , Rodentia
8.
Crit Rev Toxicol ; 51(8): 653-694, 2021 09.
Article in English | MEDLINE | ID: mdl-35239444

ABSTRACT

The Toxicology Forum convened an international state-of-the-science workshop, "Assessing Chemical Carcinogenicity: Hazard Identification, Classification, and Risk Assessment", in December 2020. Challenges related to assessing chemical carcinogenicity were organized under the topics of (1) problem formulation; (2) modes-of-action; (3) dose-response assessment; and (4) the use of new approach methodologies (NAMs). Key topics included the mechanisms of genotoxic and non-genotoxic carcinogenicity and how these, in conjunction with consideration of exposure conditions, might inform dose-response assessments and an overall risk assessment; approaches to evaluate the human relevance of modes-of-action observed in rodent studies; and the characterization of uncertainties. While the scientific limitations of the traditional rodent chronic bioassay were widely acknowledged, knowledge gaps that need to be overcome to facilitate the further development and uptake of NAMs were also identified. Since a single NAM is unlikely to replace the bioassay, activities to combine NAMs into integrated approaches for testing and assessment, or preferably into defined approaches for testing and assessment that include data interpretation procedures, were identified as urgent research needs. In addition, adverse outcome pathway networks can provide a framework for organizing the available evidence/data for assessing chemical carcinogenicity. Since a formally accepted decision tree to guide use of the best and most current science to advance carcinogenicity risk assessment is currently unavailable, a Decision Matrix for carcinogenicity assessment could be useful. The workshop organizers developed and presented a decision matrix, offered in tabular form, to be considered within a carcinogenicity hazard and risk assessment.


Subject(s)
Carcinogenesis , Carcinogens , Biological Assay , Carcinogenicity Tests/methods , Carcinogens/toxicity , Humans , Risk Assessment/methods
9.
Regul Toxicol Pharmacol ; 112: 104592, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32017962

ABSTRACT

The need to develop new tools and increase capacity to test pharmaceuticals and other chemicals for potential adverse impacts on human health and the environment is an active area of development. Much of this activity was sparked by two reports from the US National Research Council (NRC) of the National Academies of Sciences, Toxicity Testing in the Twenty-first Century: A Vision and a Strategy (2007) and Science and Decisions: Advancing Risk Assessment (2009), both of which advocated for "science-informed decision-making" in the field of human health risk assessment. The response to these challenges for a "paradigm shift" toward using new approach methodologies (NAMs) for safety assessment has resulted in an explosion of initiatives by numerous organizations, but, for the most part, these have been carried out independently and are not coordinated in any meaningful way. To help remedy this situation, a framework that presents a consistent set of criteria, universal across initiatives, to evaluate whether a NAM is fit for purpose was developed by a multi-stakeholder group of industry, academic, and regulatory experts. The goal of this framework is to support greater consistency across existing and future initiatives by providing a structure to collect relevant information to build confidence that will accelerate, facilitate, and encourage development of new NAMs that can ultimately be used within the appropriate regulatory contexts. In addition, this framework provides a systematic approach to evaluate the currently available NAMs and determine their suitability for potential regulatory application. This three-step evaluation framework, along with its demonstrated application in case studies, will help build confidence in the scientific understanding of these methods and their value for chemical assessment and regulatory decision-making.


Subject(s)
Decision Making , Safety Management , Humans , Risk Assessment , Toxicity Tests
10.
Regul Toxicol Pharmacol ; 113: 104624, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32126256

ABSTRACT

An international expert working group representing 37 organisations (pharmaceutical/biotechnology companies, contract research organisations, academic institutions and regulatory bodies) collaborated in a data sharing exercise to evaluate the utility of two species within regulatory general toxicology studies. Anonymised data on 172 drug candidates (92 small molecules, 46 monoclonal antibodies, 15 recombinant proteins, 13 synthetic peptides and 6 antibody-drug conjugates) were submitted by 18 organisations. The use of one or two species across molecule types, the frequency for reduction to a single species within the package of general toxicology studies, and a comparison of target organ toxicities identified in each species in both short and longer-term studies were determined. Reduction to a single species for longer-term toxicity studies, as used for the development of biologicals (ICHS6(R1) guideline) was only applied for 8/133 drug candidates, but might have been possible for more, regardless of drug modality, as similar target organ toxicity profiles were identified in the short-term studies. However, definition and harmonisation around the criteria for similarity of toxicity profiles is needed to enable wider consideration of these principles. Analysis of a more robust dataset would be required to provide clear, evidence-based recommendations for expansion of these principles to small molecules or other modalities where two species toxicity testing is currently recommended.


Subject(s)
Drug Development , Drug Evaluation, Preclinical/adverse effects , Toxicity Tests , Animals , Databases, Factual , Humans , Risk Assessment
11.
Cutan Ocul Toxicol ; 39(3): 180-192, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32586141

ABSTRACT

PURPOSE: OptiSafe is an in chemico test method that identifies potential eye irritants based on macromolecular damage following test chemical exposure. The OptiSafe protocol includes a prescreen assessment that identifies test chemicals that are outside the applicability domain of the test method and thus determines the optimal procedure. We assessed the usefulness and limitations of the OptiSafe test method for identifying chemicals not requiring classification for ocular irritation (i.e. bottom-up testing strategy). MATERIALS AND METHODS: Seventeen chemicals were selected by the lead laboratory and tested as an independent study. Ninety-five unique coded chemicals were selected by a validation management team to assess the intra- and interlaboratory reproducibility and accuracy of OptiSafe in a multilaboratory, three-phased validation study. Three laboratories (lead laboratory and two naïve laboratories) evaluated 35 chemicals, with the remaining 60 chemicals evaluated by the lead laboratory only. Test method performance was assessed by comparing classifications based on OptiSafe results to classifications based on available retrospective in vivo data, using both the EPA and GHS eye irritation hazard classification systems. No prospective in vivo testing was conducted. RESULTS: Phase I testing of five chemicals showed that the method could be transferred to naïve laboratories; within-lab reproducibility ranged from 93% to 100% for both classification systems. Thirty coded chemicals were evaluated in Phase II of the validation study to demonstrate both intra- and interlaboratory reproducibility. Intralaboratory reproducibility for both EPA and GHS classification systems for Phase II of the validation study ranged from 93% to 99%, while interlaboratory reproducibility was 91% for both systems. Test method accuracy for the EPA and GHS classification systems based on results from individual laboratories ranged from 82% to 88% and from 78% to 88%, respectively, among the three laboratories; false negative rates ranged from 0% to 7% (EPA) and 0% to 15% (GHS). When results across all three laboratories were combined based on the majority classification, test method accuracy and false negative rates were 89% and 0%, respectively, for both classification systems, while false positive rates were 25% and 23% for the EPA and GHS classification systems, respectively. Validation study Phase III evaluation of an additional 60 chemicals by the lead laboratory provided a comprehensive assessment of test method accuracy and defined the applicability domain of the method. Based on chemicals tested in Phases II and III by the lead laboratory, test method accuracy was 83% and 79% for the EPA and GHS classification systems, respectively; false negative rates were 4% (EPA) and 0% (GHS); and false positive rates were 40% (EPA) and 42% (GHS). Potential causes of false positives in certain chemical (e.g. ethers and alcohols) or hazard classes are being further investigated. CONCLUSION: The OptiSafe test method is useful for identifying nonsurfactant substances not requiring classification for ocular irritancy. OptiSafe represents a new tool for the in vitro assessment of ocular toxicity in a tiered-testing strategy where chemicals can be initially tested and identified as not requiring hazard classification.
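The accuracy, false negative, and false positive figures reported for validation studies like this one all derive from a 2x2 confusion matrix of predicted versus in vivo classifications. A minimal sketch of those calculations (the counts used here are illustrative only, not the OptiSafe data):

```python
def performance(tp, fn, tn, fp):
    """Common test-method performance metrics from a 2x2 confusion matrix:
    tp/fn = irritants correctly/incorrectly called, tn/fp likewise for
    non-irritants."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return {
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "false_negative_rate": fn / (tp + fn),
        "false_positive_rate": fp / (tn + fp),
    }

# Hypothetical counts: 27 irritants (none missed) and 20 non-irritants
# (5 over-predicted)
m = performance(tp=27, fn=0, tn=15, fp=5)
print(m["false_positive_rate"])  # 0.25
```

A 0% false negative rate is what makes a method suitable for a bottom-up strategy: chemicals it clears can be confidently labeled as not requiring classification, while positives go on to further testing.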


Subject(s)
Animal Testing Alternatives , Eye/drug effects , Irritants/toxicity , Toxicity Tests, Acute/methods , Hydrogen-Ion Concentration , Irritants/chemistry , Macromolecular Substances/chemistry , Reproducibility of Results , Solubility , Water/chemistry
12.
Biologicals ; 60: 8-14, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31255474

ABSTRACT

This two-day workshop, co-sponsored by NICEATM and IABS-NA, brought together over 60 international scientists from government, academia, and industry to advance alternative methods for human and veterinary Rabies Virus Vaccine (RVV) potency testing. On day one, workshop presentations focused on regulatory perspectives related to in vitro potency testing, including recent additions to the European Pharmacopoeia (5.2.14) that provide a scientific rationale for why in vivo methods may be less suitable for vaccine quality control than appropriately designed in vitro methods. Further presentations reviewed the role of the consistency approach to manufacturing and vaccine batch comparison to provide supportive data for the substitution of existing animal-based methods with in vitro assays. In addition, updates from research programs evaluating and validating RVV glycoprotein (G) quantitation by ELISA as an in vitro potency test were presented. On the second day, RVV stakeholders participated in separate human and veterinary vaccine discussion groups focused on identifying potential obstacles or additional requirements for successful implementation of non-animal alternatives to the in vivo potency test. Workshop outcomes and proposed follow up activities are discussed herein.


Subject(s)
Rabies Vaccines/therapeutic use , Rabies virus/immunology , Rabies/prevention & control , Vaccine Potency , Animals , Biological Science Disciplines , Education , Humans , Quality Control , Rabies/immunology , Rabies/pathology , Rabies Vaccines/immunology , Societies, Scientific
13.
Regul Toxicol Pharmacol ; 102: 30-33, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30578838

ABSTRACT

The acute toxicity 'six-pack' is a suite of tests for hazard identification and risk assessment, primarily conducted for the classification and labelling of industrial chemicals and agrochemicals. The 'six-pack' is designed to provide information on health hazards likely to arise from short-term exposure to chemicals via inhalation, oral and dermal routes, including the potential for eye and skin irritation/corrosion and skin sensitization. The component tests of the 'six-pack' currently rely heavily on the use of experimental animals. In 2017, the UK National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), together with the European Union Reference Laboratory for Alternatives to Animal Testing (EURL-ECVAM) and the US National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), held a workshop entitled 'Towards Global Elimination of the Acute Toxicity Six-Pack' to explore opportunities to use alternative (non-animal) methods for hazard identification and classification without compromising human or environmental safety. The workshop included scientists from regulatory agencies and industrial organisations worldwide, and sought to gain a more detailed understanding of the barriers to the adoption of suitable animal-free alternatives at an international level. Among the issues addressed were the recurring theme of validation and scientific credibility, the need for international standards, an understanding of the limitations of each new/alternative method, and characterisation against the variability of current animal methods. The practicality and cost of new tests were also important considerations. However, the need for mutual acceptance and global harmonisation of requirements was thought to be the major hurdle to overcome to realise a vision of the eventual complete elimination of the current, animal test-based, acute toxicity 'six-pack'.


Subject(s)
Animal Testing Alternatives , Toxicity Tests, Acute/methods
14.
Cutan Ocul Toxicol ; 38(2): 141-155, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30418044

ABSTRACT

PURPOSE: Eye and skin irritation test data are required or considered by chemical regulation authorities in the United States to develop product hazard labelling and/or to assess risks for exposure to skin- and eye-irritating chemicals. The combination of animal welfare concerns and interest in implementing methods with greater human relevance has led to the development of non-animal skin- and eye-irritation test methods. To identify opportunities for regulatory uses of non-animal replacements for skin and eye irritation tests, the needs and uses for these types of test data at U.S. regulatory and research agencies must first be clarified. METHODS: We surveyed regulatory and non-regulatory testing needs of U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) agencies for skin and eye irritation testing data. Information reviewed includes the type of skin and eye irritation data required by each agency and the associated decision context: hazard classification, potency classification, or risk assessment; the preferred tests; and whether alternative or non-animal tests are acceptable. Information on the specific information needed from non-animal test methods also was collected. RESULTS: A common theme across U.S. agencies is the willingness to consider non-animal or alternative test methods. Sponsors are encouraged to consult with the relevant agency in designing their testing program to discuss the use and acceptance of alternative methods for local skin and eye irritation testing. CONCLUSIONS: To advance the implementation of alternative testing methods, a dialog on the confidence of these methods to protect public health and the environment must be undertaken at all levels.


Subject(s)
Animal Testing Alternatives/legislation & jurisprudence , Government Regulation , Toxicity Tests , Animals , Eye/drug effects , Government Agencies , Humans , Skin/drug effects , United States
15.
Crit Rev Toxicol ; 48(5): 359-374, 2018 05.
Article in English | MEDLINE | ID: mdl-29474122

ABSTRACT

Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.
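The simplest of the defined approaches evaluated here, the consensus method, can be sketched as a fixed majority vote over individual assay calls. This is an illustrative reduction of the idea, not any specific submitted strategy; the assay names in the comment (DPRA, KeratinoSens, h-CLAT) are examples of in chemico/in vitro inputs, and the tie-breaking rule is an assumption.

```python
def consensus_call(assay_calls):
    """Majority vote over individual assay hazard calls (True = sensitizer).

    A 'defined approach' in miniature: fixed inputs plus a fixed data
    interpretation procedure, with ties resolved conservatively toward
    the sensitizer call.
    """
    positives = sum(assay_calls)
    return positives * 2 >= len(assay_calls)

# e.g. DPRA positive, KeratinoSens negative, h-CLAT positive
print(consensus_call([True, False, True]))   # True
print(consensus_call([False, False, True]))  # False
```

What distinguishes a defined approach from ad hoc expert judgment is exactly this property: the interpretation procedure is fully specified in advance, so anyone applying it to the same inputs gets the same call.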


Subject(s)
Animal Testing Alternatives/methods , Computational Biology/methods , Computer Simulation , Cosmetics/adverse effects , Dermatitis, Allergic Contact/immunology , Skin/immunology , Animals , Cosmetics/pharmacology , Dermatitis, Allergic Contact/etiology , Humans , Mice , Skin/drug effects
16.
Regul Toxicol Pharmacol ; 94: 183-196, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29408321

ABSTRACT

Acute systemic toxicity data are used by a number of U.S. federal agencies, most commonly for hazard classification and labeling and/or risk assessment for acute chemical exposures. To identify opportunities for the implementation of non-animal approaches to produce these data, the regulatory needs and uses for acute systemic toxicity information must first be clarified. Thus, we reviewed acute systemic toxicity testing requirements for six U.S. agencies (Consumer Product Safety Commission, Department of Defense, Department of Transportation, Environmental Protection Agency, Food and Drug Administration, Occupational Safety and Health Administration) and noted whether there is flexibility in satisfying data needs with methods that replace or reduce animal use. Understanding the current regulatory use and acceptance of non-animal data is a necessary starting point for future method development, optimization, and validation efforts. The current review will inform the development of a national strategy and roadmap for implementing non-animal approaches to assess potential hazards associated with acute exposures to industrial chemicals and medical products. The Acute Toxicity Workgroup of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), U.S. agencies, non-governmental organizations, and other stakeholders will work to execute this strategy.


Subject(s)
Government Agencies/legislation & jurisprudence , Toxicity Tests, Acute , Animals , Humans , United States
17.
Chem Res Toxicol ; 30(4): 946-964, 2017 04 17.
Article in English | MEDLINE | ID: mdl-27933809

ABSTRACT

Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can more rapidly and inexpensively identify potential androgen-active chemicals. We integrated 11 HTS ToxCast/Tox21 in vitro assays into a computational network model to distinguish true AR pathway activity from technology-specific assay interference. The in vitro HTS assays probed perturbations of the AR pathway at multiple points (receptor binding, coregulator recruitment, gene transcription, and protein production) and in multiple cell types. Confirmatory in vitro antagonist assay data and cytotoxicity information were used as additional flags for potential nonspecific activity. Validating such alternative testing strategies requires high-quality reference data. We compiled 158 putative androgen-active and -inactive chemicals from a combination of international test method validation efforts and semiautomated systematic literature reviews. Detailed in vitro assay information and results were compiled into a single database using a standardized ontology. Reference chemical concentrations that activated or inhibited AR pathway activity were identified to establish a range of potencies with reproducible reference chemical results. Comparison with existing Tier 1 AR binding data from the U.S. EPA Endocrine Disruptor Screening Program revealed that the model identified binders at relevant test concentrations (<100 µM) and was more sensitive to antagonist activity. The AR pathway model based on the ToxCast/Tox21 assays had balanced accuracies of 95.2% for agonist (n = 29) and 97.5% for antagonist (n = 28) reference chemicals. Out of 1855 chemicals screened in the AR pathway model, 220 chemicals demonstrated AR agonist or antagonist activity and an additional 174 chemicals were predicted to have potential weak AR pathway activity.
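The balanced-accuracy statistic this abstract reports (95.2% for agonists, 97.5% for antagonists) can be sketched in a few lines. This is not the paper's code, and the reference-chemical labels below are invented for illustration; the point is only that balanced accuracy averages sensitivity and specificity, making it robust to unequal numbers of active and inactive reference chemicals.

```python
# Minimal sketch of balanced accuracy for binary activity calls.
def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)          # number of active reference chemicals
    neg = len(y_true) - pos    # number of inactive reference chemicals
    # Average of sensitivity (tp/pos) and specificity (tn/neg).
    return (tp / pos + tn / neg) / 2

# Hypothetical reference-chemical outcomes: 1 = AR-active, 0 = inactive.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.875
```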


Subject(s)
Androgen Receptor Antagonists/metabolism , Androgens/metabolism , Models, Theoretical , Receptors, Androgen/metabolism , Androgen Receptor Antagonists/chemistry , Androgen Receptor Antagonists/pharmacology , Androgens/chemistry , Androgens/pharmacology , Area Under Curve , High-Throughput Screening Assays , Humans , Protein Binding , ROC Curve , Receptors, Androgen/chemistry , Receptors, Androgen/genetics , Transcriptional Activation/drug effects
18.
J Chem Inf Model ; 57(1): 36-49, 2017 01 23.
Article in English | MEDLINE | ID: mdl-28006899

ABSTRACT

Little toxicity data are available for the vast majority of chemicals in commerce. High-throughput screening (HTS) studies, such as those being carried out by the U.S. Environmental Protection Agency (EPA) ToxCast program in partnership with the federal Tox21 research program, can generate biological data to inform models for predicting potential toxicity. However, physicochemical properties are also needed to model environmental fate and transport, as well as exposure potential. The purpose of the present study was to generate an open-source quantitative structure-property relationship (QSPR) workflow to predict a variety of physicochemical properties that would have cross-platform compatibility to integrate into existing cheminformatics workflows. In this effort, decades-old experimental property data sets available within the EPA EPI Suite were reanalyzed using modern cheminformatics workflows to develop updated QSPR models capable of supplying computationally efficient, open, and transparent HTS property predictions in support of environmental modeling efforts. Models were built using updated EPI Suite data sets for the prediction of six physicochemical properties: octanol-water partition coefficient (logP), water solubility (logS), boiling point (BP), melting point (MP), vapor pressure (logVP), and bioconcentration factor (logBCF). The coefficient of determination (R2) between the estimated values and experimental data for the six predicted properties ranged from 0.826 (MP) to 0.965 (BP), with model performance for five of the six properties exceeding those from the original EPI Suite models. The newly derived models can be employed for rapid estimation of physicochemical properties within an open-source HTS workflow to inform fate and toxicity prediction models of environmental chemicals.
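The QSPR evaluation above compares predicted and experimental property values via the coefficient of determination (R²). A minimal one-descriptor sketch of that workflow follows; the descriptor values, property values, and fitting routine are invented stand-ins, not the study's actual models or data, and serve only to show how R² summarizes fit quality.

```python
# Fit a one-descriptor linear QSPR-style model by ordinary least squares.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Coefficient of determination between observed and predicted values.
def r_squared(y, y_hat):
    my = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical descriptor (x) vs. experimental logP (y).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.9, 2.1, 2.9, 4.2, 4.9]
m, b = fit_line(x, y)
preds = [m * xi + b for xi in x]
print(round(r_squared(y, preds), 3))
```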


Subject(s)
Chemical Phenomena , Computer Simulation , Environmental Pollutants/chemistry , Machine Learning , Environmental Pollutants/toxicity , Informatics , Quantitative Structure-Activity Relationship , Solubility , Transition Temperature , Vapor Pressure , Water/chemistry
19.
J Appl Toxicol ; 37(3): 347-360, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27480324

ABSTRACT

One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays - the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens™ assay - six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
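The abstract describes training logistic regression models on combined assay readouts. A toy sketch of that idea follows; the feature columns (scaled DPRA, h-CLAT, and read-across values), labels, and tiny training loop are all invented for illustration and bear no relation to the study's actual 72-substance training set or fitted models.

```python
import math

# Train a tiny logistic regression by stochastic gradient descent on
# the log-loss. Features are combined assay readouts, scaled to 0-1.
def train_logreg(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log-loss with respect to the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Columns (hypothetical): DPRA depletion, h-CLAT activation, read-across
# hazard call. Label 1 = human skin sensitizer.
X = [[0.9, 0.8, 1.0], [0.7, 0.9, 1.0], [0.1, 0.2, 0.0], [0.2, 0.1, 0.0]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```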


Subject(s)
Dermatitis, Allergic Contact/etiology , Hazardous Substances/toxicity , Models, Biological , Skin/drug effects , Animal Use Alternatives , Biological Assay , Databases, Factual , Dermatitis, Allergic Contact/immunology , Humans , Logistic Models , Machine Learning , Multivariate Analysis , Predictive Value of Tests
20.
J Appl Toxicol ; 37(7): 792-805, 2017 07.
Article in English | MEDLINE | ID: mdl-28074598

ABSTRACT

The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
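The two-tiered strategy this abstract describes (first call sensitizer vs. non-sensitizer, then split sensitizers into potency categories) can be sketched as a simple decision cascade. Plain threshold rules stand in here for the study's support vector machines, and the feature name and cutoffs are illustrative assumptions only.

```python
# Tier 1: sensitizer vs. non-sensitizer (hypothetical cutoff on a
# combined assay signal scaled 0-1).
def tier1_is_sensitizer(features):
    return features["assay_signal"] >= 0.3

# Tier 2: reached only for tier-1 sensitizers; splits strong vs. weak.
def tier2_potency(features):
    return "strong" if features["assay_signal"] >= 0.7 else "weak"

def classify(features):
    if not tier1_is_sensitizer(features):
        return "non-sensitizer"
    return tier2_potency(features)

print(classify({"assay_signal": 0.85}))  # strong
print(classify({"assay_signal": 0.45}))  # weak
print(classify({"assay_signal": 0.10}))  # non-sensitizer
```

In the study, each tier was a separately trained model; the cascade structure, rather than the particular decision rule, is what the two-tiered approach contributes.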


Subject(s)
Animal Testing Alternatives/methods , Biological Assay/methods , Dermatitis, Allergic Contact/etiology , Dermatitis, Allergic Contact/immunology , Hazardous Substances/toxicity , Machine Learning , Skin/drug effects , Humans , Models, Statistical , United States