ABSTRACT
Access to computationally based visualization tools to navigate chemical space has become more important due to the increasing size and diversity of publicly accessible databases, associated compendiums of high-throughput screening (HTS) results, and other descriptor and effects data. However, application of these techniques requires advanced programming skills that are beyond the capabilities of many stakeholders. Here we report the development of the second version of the ChemMaps.com webserver (https://sandbox.ntp.niehs.nih.gov/chemmaps/) focused on environmental chemical space. The chemical space of ChemMaps.com v2.0, released in 2022, now includes approximately one million environmental chemicals from the EPA Distributed Structure-Searchable Toxicity (DSSTox) inventory. ChemMaps.com v2.0 incorporates mapping of HTS assay data from the U.S. federal Tox21 research collaboration program, which includes results from around 2,000 assays tested on up to 10,000 chemicals. As a case example, we showcased chemical space navigation for perfluorooctanoic acid (PFOA), a member of the per- and polyfluoroalkyl substances (PFAS) chemical family, which is of significant concern for potential effects on human health and the environment.
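As a rough illustration of how such a map can be generated, the sketch below projects a handful of structures, including PFOA, into two dimensions using Morgan fingerprints and PCA. This is a minimal sketch assuming RDKit and NumPy are available; it is not the server's actual pipeline, and the molecule list, fingerprint settings, and projection method are illustrative choices.

```python
# Minimal sketch (not the ChemMaps.com pipeline): place chemicals on a
# 2D "map" from Morgan fingerprints via PCA. All parameters invented.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = [
    "OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F",  # PFOA
    "Oc1ccccc1",                # phenol
    "CCO",                      # ethanol
    "CC(=O)Oc1ccccc1C(=O)O",    # aspirin
]

def fingerprint(smi, n_bits=1024):
    """Morgan fingerprint (radius 2) as a NumPy vector."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr.astype(float)

X = np.vstack([fingerprint(s) for s in smiles])

# PCA via SVD on mean-centered fingerprints; the first two principal
# components give each chemical's 2D map coordinates.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ vt[:2].T
for smi, (x, y) in zip(smiles, coords):
    print(f"({x:6.2f}, {y:6.2f})  {smi}")
```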
Subject(s)
Databases, Chemical , High-Throughput Screening Assays , Software , Environment
ABSTRACT
BACKGROUND: Chemically induced skin sensitization, or allergic contact dermatitis, is a common occupational and public health issue. Regulatory authorities require an assessment of the potential to cause skin sensitization for many chemical products. Defined approaches for skin sensitization (DASS) identify potential chemical skin sensitizers by integrating data from multiple non-animal tests based on human cells, molecular targets, and computational model predictions using standardized data interpretation procedures. While several DASS are internationally accepted by regulatory agencies, the data interpretation procedures vary in logical complexity, and manual application can be time-consuming or prone to error. RESULTS: We developed the DASS App, an open-source web application, to facilitate user application of three regulatory testing strategies for skin sensitization assessment: the Two-out-of-Three (2o3), the Integrated Testing Strategy (ITS), and the Key Event 3/1 Sequential Testing Strategy (KE 3/1 STS), without the need for software downloads or computational expertise. The application supports upload and analysis of user-provided data, includes steps to identify inconsistencies and formatting issues, and provides predictions in a downloadable format. CONCLUSION: This open-access, web-based implementation of internationally harmonized regulatory guidelines for an important public health endpoint is designed to support broad user uptake and consistent, reproducible application. The DASS App is freely accessible via https://ntp.niehs.nih.gov/go/952311 and all scripts are available on GitHub (https://github.com/NIEHS/DASS).
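For intuition, here is a minimal sketch of the 2o3 data interpretation logic as commonly described for OECD Guideline No. 497: classification follows two concordant results among the DPRA, KeratinoSens, and h-CLAT assays. This is an illustrative reading of the rule in Python, not the DASS App's actual implementation; the function name and call encoding are assumptions.

```python
# Hedged sketch of the 2-out-of-3 (2o3) defined approach logic.
def two_out_of_three(dpra, keratinosens, hclat):
    """Each input is '+', '-', or None (missing/inconclusive)."""
    calls = [c for c in (dpra, keratinosens, hclat) if c in ("+", "-")]
    if calls.count("+") >= 2:
        return "sensitizer"
    if calls.count("-") >= 2:
        return "non-sensitizer"
    return "inconclusive"  # fewer than two concordant results

print(two_out_of_three("+", "-", "+"))   # sensitizer
print(two_out_of_three("-", None, "-"))  # non-sensitizer
```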
Subject(s)
Dermatitis, Allergic Contact , Mobile Applications , Animals , Humans , Animal Testing Alternatives/methods , Skin , Dermatitis, Allergic Contact/etiology
ABSTRACT
The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure, to a data-rich field ripe for AI integration. The volume, variety and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured, a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher in new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.
Subject(s)
Artificial Intelligence , Chemical Safety , Humans , Neural Networks, Computer , Machine Learning , Catalysis
ABSTRACT
Since the 1940s, patch tests in healthy volunteers (Human Predictive Patch Tests, HPPTs) have been used to identify chemicals that cause skin sensitization in humans. Recently, we reported the results of a major curation effort to support the development of OECD Guideline 497 on Defined Approaches (DAs) for skin sensitization (OECD in Guideline No. 497: Defined Approaches on Skin Sensitisation, 2021a. https://doi.org/10.1787/b92879a4-en). In the course of this work, we compiled and published a database of 2277 HPPT results for 1366 unique test substances (Strickland et al. in Arch Toxicol 97:2825-2837, 2023. https://doi.org/10.1007/s00204-023-03530-3). Here we report a detailed analysis of the value of HPPT data for classification of chemicals as skin sensitizers under the United Nations' Globally Harmonized System of Classification and Labelling of Chemicals (GHS). As a result, we propose the dose per skin area (DSA) used for classification by the GHS to be replaced by or complemented with a dose descriptor that may better reflect sensitization incidence [e.g., the DSA causing induction of sensitization in one individual (DSA1+) or the DSA leading to an incidence of induction in 5% of the tested individuals (DSA05)]. We also propose standardized concepts and workflows for assessing individual HPPT results, for integrating multiple HPPT results and for using them in concert with Local Lymph Node Assay (LLNA) data in a weight of evidence (WoE) assessment. Overall, our findings show that HPPT results are often not sufficient for deriving unambiguous classifications on their own. However, where they are, the resulting classifications are reliable and reproducible and can be integrated well with those from other skin sensitization data, such as the LLNA.
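To make the proposed descriptors concrete, the sketch below derives DSA1+ and DSA05 from hypothetical HPPT records. A real derivation might interpolate or model incidence rather than take the lowest qualifying tested dose, so treat this as a simplified reading of the definitions quoted above.

```python
# Simplified sketch of the proposed dose descriptors, using invented
# HPPT records of (DSA in ug/cm2, number sensitized, number tested).
records = [(250.0, 0, 50), (500.0, 1, 48), (1000.0, 4, 52), (2000.0, 9, 50)]

# DSA1+: lowest tested dose per skin area inducing sensitization in
# at least one individual; DSA05: lowest tested DSA with >= 5% observed
# incidence of induction.
dsa1plus = min((dsa for dsa, n_pos, _ in records if n_pos >= 1), default=None)
dsa05 = min((dsa for dsa, n_pos, n in records if n_pos / n >= 0.05), default=None)

print(f"DSA1+ = {dsa1plus} ug/cm2")  # 500.0 (1/48 sensitized)
print(f"DSA05 = {dsa05} ug/cm2")     # 1000.0 (4/52, ~7.7% incidence)
```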
Subject(s)
Dermatitis, Allergic Contact , Humans , Patch Tests , Dermatitis, Allergic Contact/etiology , Allergens/toxicity , Skin , Local Lymph Node Assay
ABSTRACT
In toxicology and regulatory testing, the use of animal methods has been both a cornerstone and a subject of intense debate. To continue this discourse, a panel and audience representing scientists from various sectors and countries convened at a workshop held during the 12th World Congress on Alternatives and Animal Use in the Life Sciences (WC-12). The ensuing discussion focused on the scientific and ethical considerations surrounding the necessity and responsibility of defending the creation of new animal data in regulatory testing. The primary aim was to foster an open dialogue between the panel members and the audience while encouraging diverse perspectives on the responsibilities and obligations of various stakeholders (including industry, regulatory bodies, technology developers, research scientists, and animal welfare NGOs) in defending the development and subsequent utilization of new animal data. This workshop summary report captures the key elements from this critical dialogue and collective introspection. It describes the intersection of scientific progress and ethical responsibility as all sectors seek to accelerate the pace of 21st century predictive toxicology and new approach methodologies (NAMs) for the protection of human health and the environment.
Subject(s)
Animal Welfare , Research Report , Animals , Humans , Industry , Risk Assessment , Animal Testing Alternatives/methods
ABSTRACT
Inhalation is a critical route through which substances can exert adverse effects in humans; therefore, it is important to characterize the potential effects that inhaled substances may have on the human respiratory tract by using fit-for-purpose, reliable, and human-relevant testing tools. In regulatory toxicology testing, rats have primarily been used to assess the effects of inhaled substances as they, being mammals, share similarities in structure and function of the respiratory tract with humans. However, questions about inter-species differences impacting the predictability of human effects have surfaced. Disparities in macroscopic anatomy, microscopic anatomy, or physiology, such as breathing mode (e.g., nose-only versus oronasal breathing), airway structure (e.g., complexity of the nasal turbinates), cell types and location within the respiratory tract, and local metabolism may impact inhalation toxicity testing results. This review shows that these key differences introduce uncertainty into the use of rat data to predict human effects and supports an opportunity to harness modern toxicology tools and a detailed understanding of the human respiratory tract to develop testing approaches grounded in human biology. Ultimately, as the regulatory purpose is protecting human health, there is a need for testing approaches based on human biology and mechanisms of toxicity.
Subject(s)
Respiratory System , Species Specificity , Toxicity Tests , Animals , Humans , Respiratory System/drug effects , Respiratory System/anatomy & histology , Rats , Toxicity Tests/methods , Inhalation Exposure/adverse effects , Risk Assessment
ABSTRACT
The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
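For clarity, the sketch below bins LD50 values into USEPA acute oral toxicity categories and computes categorical concordance for a few invented prediction/observation pairs. The cut-offs for categories III and IV follow the ranges quoted above; those for I and II (≤50 and >50-500 mg/kg) are the standard USEPA bins, stated here as an assumption.

```python
# Illustrative sketch: USEPA acute oral toxicity categorization and
# categorical concordance between empirical and predicted LD50 values.
def epa_category(ld50):
    """USEPA acute oral toxicity category from an LD50 in mg/kg."""
    if ld50 <= 50:
        return "I"
    if ld50 <= 500:
        return "II"
    if ld50 <= 5000:
        return "III"
    return "IV"

# Invented (empirical, predicted) LD50 pairs in mg/kg.
pairs = [(620, 810), (5400, 7200), (1500, 4900), (300, 650)]
concordant = sum(epa_category(e) == epa_category(p) for e, p in pairs)
print(f"categorical concordance: {concordant}/{len(pairs)}")  # 3/4
```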
Subject(s)
Computer Simulation , Pesticides , Quantitative Structure-Activity Relationship , Toxicity Tests, Acute , United States Environmental Protection Agency , Animals , Risk Assessment , Pesticides/toxicity , Lethal Dose 50 , Rats , Administration, Oral , Toxicity Tests, Acute/methods , United States , Reproducibility of Results
ABSTRACT
Many sectors have seen complete replacement of the in vivo rabbit eye test with reproducible and relevant in vitro and ex vivo methods to assess the eye corrosion/irritation potential of chemicals. However, the in vivo rabbit eye test remains the standard test used for agrochemical formulations in some countries. Therefore, two defined approaches (DAs) for assessing conventional agrochemical formulations were developed, using the EpiOcular™ Eye Irritation Test (EIT) [Organisation for Economic Co-operation and Development (OECD) test guideline (TG) 492] and the Bovine Corneal Opacity and Permeability (OECD TG 437; BCOP) test with histopathology. Presented here are the results from testing 29 agrochemical formulations, which were evaluated against the United States Environmental Protection Agency's (EPA) pesticide classification system, and assessed using orthogonal validation, rather than direct concordance analysis with the historical in vivo rabbit eye data. Scientific confidence was established by evaluating the methods and testing results using an established framework that considers fitness for purpose, human biological relevance, technical characterisation, data integrity and transparency, and independent review. The in vitro and ex vivo methods used in the DAs were demonstrated to be as or more fit for purpose, reliable and relevant than the in vivo rabbit eye test. Overall, there is high scientific confidence in the use of these DAs for assessing the eye corrosion/irritation potential of agrochemical formulations.
Subject(s)
Corneal Opacity , Epithelium, Corneal , Humans , Animals , Cattle , Rabbits , Eye , Epithelium, Corneal/pathology , Agrochemicals/toxicity , Irritants/toxicity , Corneal Opacity/chemically induced , Corneal Opacity/pathology , Permeability , Animal Testing Alternatives
ABSTRACT
The National Center for Advancing Translational Sciences (NCATS) Assay Guidance Manual (AGM) Workshop on 3D Tissue Models for Antiviral Drug Development, held virtually on 7-8 June 2022, provided comprehensive coverage of critical concepts intended to help scientists establish robust, reproducible, and scalable 3D tissue models to study viruses with pandemic potential. This workshop was organized by NCATS, the National Institute of Allergy and Infectious Diseases, and the Bill and Melinda Gates Foundation. During the workshop, scientific experts from academia, industry, and government provided an overview of 3D tissue models' utility and limitations, use of existing 3D tissue models for antiviral drug development, practical advice, best practices, and case studies about the application of available 3D tissue models to infectious disease modeling. This report includes a summary of each workshop session as well as a discussion of perspectives and challenges related to the use of 3D tissues in antiviral drug discovery.
Subject(s)
Antiviral Agents , Drug Discovery , Antiviral Agents/pharmacology , Antiviral Agents/therapeutic use , Biological Assay
ABSTRACT
Chemical regulatory authorities around the world require systemic toxicity data from acute exposures via the oral, dermal, and inhalation routes for human health risk assessment. To identify opportunities for regulatory uses of non-animal replacements for these tests, we reviewed acute systemic toxicity testing requirements for jurisdictions that participate in the International Cooperation on Alternative Test Methods (ICATM): Brazil, Canada, China, the European Union, Japan, South Korea, Taiwan, and the USA. The chemical sectors included in our review of each jurisdiction were cosmetics, consumer products, industrial chemicals, pharmaceuticals, medical devices, and pesticides. We found acute systemic toxicity data were most often required for hazard assessment, classification, and labeling, and to a lesser extent quantitative risk assessment. Where animal methods were required, animal reduction methods were typically recommended. For many jurisdictions and chemical sectors, non-animal alternatives are not accepted, but several jurisdictions provide guidance to support the use of test waivers to reduce animal use for specific applications. An understanding of international regulatory requirements for acute systemic toxicity testing will inform ICATM's strategy for the development, acceptance, and implementation of non-animal alternatives to assess the health hazards and risks associated with acute toxicity.
ABSTRACT
Critical to the evaluation of non-animal tests are reference data with which to assess their relevance. Animal data are typically used because they are generally standardized and available. However, when regulatory agencies aim to protect human health, human reference data provide the benefit of not having to account for possible interspecies variability. To support the evaluation of non-animal approaches for skin sensitization assessment, we collected data from 2277 human predictive patch tests (HPPTs), i.e., human repeat insult patch tests and human maximization tests, for skin sensitization from 1555 publications. We recorded protocol elements and positive or negative outcomes, developed a scoring system to evaluate each test for reliability, and calculated traditional and non-traditional dose metrics. We also traced each test result back to its original report to remove duplicates. The resulting database, which contains information for 1366 unique substances, was characterized for physicochemical properties, chemical structure categories, and protein binding mechanisms. This database is publicly available on the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods website and in the Integrated Chemical Environment to serve as a resource for additional evaluation of alternative methods and development of new approach methodologies for skin sensitization assessments.
Subject(s)
Benchmarking , Skin , Humans , Patch Tests , Reproducibility of Results , Databases, Factual
ABSTRACT
Meaningful and accurate reference data are crucial for the validation of New Approach Methodologies (NAMs) in toxicology. For skin sensitization, multiple reference datasets are available including human patch test data, guinea pig data and data from the mouse local lymph node assay (LLNA). When assessed against the LLNA, a reduced sensitivity has been reported for in vitro and in chemico assays for lipophilic chemicals with a LogP ≥3.5, resulting in reliability restrictions within the h-CLAT OECD test guideline. Here we address the question of whether LLNA data are an appropriate reference for chemicals in this physicochemical range. Analysis of LLNA vs human reference data indicates that the false-discovery rate of the LLNA is significantly higher for chemicals with LogP ≥3.5. We present a mechanistic hypothesis whereby irritation caused by testing lipophilic chemicals at high test doses leads to unspecific cell proliferation. The accompanying analysis indicates that for lipophilic chemicals with negative calls in in vitro and in chemico assays, resorting to the LLNA is not necessarily a better option. These results indicate that the validation of NAMs in this particular LogP range should be based on a more holistic evaluation of the reference data and not solely upon LLNA data.
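The stratified false-discovery rate analysis can be computed as in the sketch below, where FDR = FP / (FP + TP), human reference calls are taken as truth, and chemicals are split at LogP 3.5 as in the abstract. The records are invented.

```python
# Hedged sketch: LLNA false-discovery rate against human reference
# data, stratified by LogP. Records: (LogP, LLNA positive, human positive).
data = [(1.2, True, True), (4.1, True, False), (3.8, True, False),
        (0.5, False, False), (5.0, True, True), (2.2, True, True)]

def fdr(records):
    """FDR = FP / (FP + TP), with human calls as the reference."""
    fp = sum(llna and not human for _, llna, human in records)
    tp = sum(llna and human for _, llna, human in records)
    return fp / (fp + tp) if (fp + tp) else float("nan")

high = [r for r in data if r[0] >= 3.5]
low = [r for r in data if r[0] < 3.5]
print(f"FDR, LogP >= 3.5: {fdr(high):.2f}")  # 0.67 (2 FP of 3 positives)
print(f"FDR, LogP <  3.5: {fdr(low):.2f}")   # 0.00
```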
Subject(s)
Dermatitis, Allergic Contact , Local Lymph Node Assay , Animals , Mice , Humans , Guinea Pigs , Dermatitis, Allergic Contact/etiology , Dermatitis, Allergic Contact/pathology , Reproducibility of Results , Skin , Patch Tests , Allergens/toxicity , Lymph Nodes/pathology
ABSTRACT
The U.S. Environmental Protection Agency (USEPA) uses the in vivo fish acute toxicity test to assess potential risk of substances to non-target aquatic vertebrates. The test is typically conducted on a cold and a warm freshwater species and a saltwater species for a conventional pesticide registration, potentially requiring 200 or more fish. A retrospective data evaluation was conducted to explore the potential for using fewer fish species to support conventional pesticide risk assessments. Lethal concentration 50% (LC50) values and experimental details were extracted and curated from 718 studies on fish acute toxicity submitted to USEPA. The LC50 data were analysed to determine, when possible, the relative sensitivity of the tested species to each pesticide. One of the tested freshwater species was most sensitive in 85% of those cases. The tested cold freshwater species was the most sensitive overall among cases with established relative sensitivity and was within 3X of the LC50 value of the most sensitive species tested in 98% of those cases. The results support potentially using fewer than three fish species to conduct ecological risk assessments for the registration of conventional pesticides.
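The relative-sensitivity comparison can be illustrated as follows: for each pesticide, identify the tested species with the lowest LC50 and ask whether a chosen surrogate falls within 3-fold of it. The species labels and LC50 values below are hypothetical, not data from the 718 curated studies.

```python
# Illustrative sketch of the relative-sensitivity / 3x-window check.
studies = {  # pesticide -> species -> LC50 (mg/L), all values invented
    "pesticide_A": {"cold_freshwater": 0.8, "warm_freshwater": 2.1, "saltwater": 1.5},
    "pesticide_B": {"cold_freshwater": 12.0, "warm_freshwater": 30.0, "saltwater": 45.0},
}

surrogate = "cold_freshwater"  # most often most sensitive, per the abstract
for pesticide, lc50s in studies.items():
    most_sensitive = min(lc50s, key=lc50s.get)  # lowest LC50 = most sensitive
    within_3x = lc50s[surrogate] <= 3 * lc50s[most_sensitive]
    print(f"{pesticide}: most sensitive = {most_sensitive}; "
          f"{surrogate} within 3x: {within_3x}")
```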
Subject(s)
Pesticides , Water Pollutants, Chemical , Animals , Pesticides/toxicity , Retrospective Studies , Fishes , Toxicity Tests, Acute/methods , Lethal Dose 50 , Water Pollutants, Chemical/toxicity , Risk Assessment
ABSTRACT
Like many other consumer and occupational products, pesticide formulations may contain active ingredients or co-formulants which have the potential to cause skin sensitisation. Currently, there is little evidence that they do, but this may simply reflect a lack of clinical investigation. Consequently, it is necessary to carry out a safety evaluation process, quantifying risks so that they can be properly managed. A workshop on this topic in 2022 discussed how best to undertake quantitative risk assessment (QRA) for pesticide products, including learning from the experience of industries, notably cosmetics, that already undertake such a process routinely. It also addressed ways to remedy the lack of clinical investigation, even if only to demonstrate the absence of a problem. Workshop participants concluded that QRA for skin sensitisers in pesticide formulations was possible, but required careful justification of any safety factors applied, as well as improvements to the estimation of skin exposure. The need for regulations to stay abreast of the science was also noted. Ultimately, the success of any risk assessment/management for skin sensitisers must be judged by the clinical picture. Accordingly, the workshop participants encouraged the development of more active skin health monitoring amongst groups most exposed to the products.
Subject(s)
Cosmetics , Dermatitis, Allergic Contact , Pesticides , Humans , Dermatitis, Allergic Contact/etiology , Pesticides/toxicity , Skin , Risk Assessment , Cosmetics/toxicity
ABSTRACT
Computational modeling grounded in reliable experimental data can help design effective non-animal approaches to predict the eye irritation and corrosion potential of chemicals. The National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) has compiled and curated a database of in vivo eye irritation studies from the scientific literature and from stakeholder-provided data. The database contains 810 annotated records of 593 unique substances, including mixtures, categorized according to UN GHS and US EPA hazard classifications. This study reports a set of in silico models to predict EPA and GHS hazard classifications for chemicals and mixtures, accounting for purity by setting thresholds of 100% and 10% concentration. We used two approaches to predict classification of mixtures: conventional and mixture-based. Conventional models evaluated each substance based on the chemical structure of its major component. These models achieved balanced accuracy in the range of 68-80% and 87-96% for the 100% and 10% test concentration thresholds, respectively. Mixture-based models, which accounted for all known components in the substance by weighted feature averaging, showed similar or slightly higher accuracy of 72-79% and 89-94% for the respective thresholds. We also noted a strong trend between the pH feature metric calculated for each substance and its activity. Across all the models, the calculated pH of inactive substances was within one log10 unit of neutral pH, on average, while for active substances, pH varied from neutral by at least 2 log10 units. This pH dependency is especially important for complex mixtures. Additional evaluation on an external test set of 673 substances obtained from ECHA dossiers achieved balanced accuracies of 64-71%, which suggests that these models can be useful in screening compounds for ocular irritation potential. Negative predictive value was particularly high and indicates the potential application of these models in a bottom-up approach to identify nonirritant substances.
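The mixture-based strategy can be sketched as a weighted average of component descriptor vectors, as below. The components, weight fractions, and three-feature descriptors are invented, and the published models used a larger, curated feature set; this only illustrates the weighted feature averaging idea.

```python
# Hedged sketch of mixture-based weighted feature averaging.
import numpy as np

mixture = {  # component -> (weight fraction, descriptor vector), all invented
    "active_ingredient": (0.40, np.array([1.2, 0.3, 7.1])),
    "surfactant":        (0.10, np.array([0.5, 2.2, 6.0])),
    "water":             (0.50, np.array([0.0, 0.0, 7.0])),
}

fractions = np.array([f for f, _ in mixture.values()])
descriptors = np.vstack([d for _, d in mixture.values()])

# Fraction-weighted average of the component descriptors stands in for
# the feature vector of a single discrete structure.
mixture_descriptor = fractions @ descriptors / fractions.sum()
print(mixture_descriptor)
```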
Subject(s)
Irritants , Toxic Optic Neuropathy , Animal Testing Alternatives , Animals , Computer Simulation , Eye , Humans , Irritants/toxicity , United States , United States Environmental Protection Agency
ABSTRACT
Chemical-induced alteration of maternal thyroid hormone levels may increase the risk of adverse neurodevelopmental outcomes in offspring. US federal risk assessments rely almost exclusively on apical endpoints in animal models for deriving points of departure (PODs). New approach methodologies (NAMs) such as high-throughput screening (HTS) and mechanistically informative in vitro human cell-based systems, combined with in vitro to in vivo extrapolation (IVIVE), supplement in vivo studies and provide an alternative approach to derive PODs. We examine how parameterization of IVIVE models impacts the comparison between IVIVE-derived equivalent administered doses (EADs) from thyroid-relevant in vitro assays and the POD values that serve as the basis for risk assessments. Pesticide chemicals with thyroid-based in vitro bioactivity data from the US Tox21 HTS program were included (n = 45). Depending on the model structure used for IVIVE analysis, up to 35 chemicals produced EAD values lower than the POD. A total of 10 chemicals produced EAD values higher than the POD regardless of the model structure. The relationship between IVIVE-derived EAD values and the in vivo-derived POD values is highly dependent on model parameterization. Here, we derive a range of potentially thyroid-relevant doses that incorporate uncertainty in modeling choices and in vitro assay data.
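A minimal IVIVE calculation of an EAD looks like the sketch below, assuming a one-number toxicokinetic summary (steady-state plasma concentration produced per unit daily dose) that in practice would come from a toxicokinetic model such as the httk package. Both numbers are placeholders, not values from this study.

```python
# Hedged IVIVE sketch: the EAD is the external dose whose steady-state
# plasma concentration (Css) matches the in vitro bioactive concentration.
def ead_mg_per_kg_day(ac50_uM, css_uM_per_unit_dose):
    """EAD = in vitro AC50 / (Css produced per 1 mg/kg/day of dose)."""
    return ac50_uM / css_uM_per_unit_dose

ac50 = 3.0          # uM, hypothetical thyroid-relevant assay potency
css_per_dose = 1.5  # uM per (mg/kg/day), hypothetical TK model output
print(f"EAD = {ead_mg_per_kg_day(ac50, css_per_dose):.1f} mg/kg/day")
# The resulting EAD would then be compared to the in vivo-derived POD.
```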
Subject(s)
Pesticides , Animals , High-Throughput Screening Assays/methods , Pesticides/toxicity , Risk Assessment/methods , Thyroid Gland , Uncertainty
ABSTRACT
Robust and efficient processes are needed to establish scientific confidence in new approach methodologies (NAMs) if they are to be considered for regulatory applications. NAMs need to be fit for purpose, reliable and, for the assessment of human health effects, provide information relevant to human biology. They must also be independently reviewed and transparently communicated. Ideally, NAM developers should communicate with stakeholders such as regulators and industry to identify the question(s) and specified purpose that the NAM is intended to address, and the context in which it will be used. Assessment of the biological relevance of the NAM should focus on its alignment with human biology, mechanistic understanding, and ability to provide information that leads to health protective decisions, rather than solely comparing NAM-based chemical testing results with those from traditional animal test methods. However, when NAM results are compared to historical animal test results, the variability observed within animal test method results should be used to inform performance benchmarks. Building on previous efforts, this paper proposes a framework comprising five essential elements to establish scientific confidence in NAMs for regulatory use: fitness for purpose, human biological relevance, technical characterization, data integrity and transparency, and independent review. Universal uptake of this framework would facilitate the timely development and use of NAMs by the international community. While this paper focuses on NAMs for assessing human health effects of pesticides and industrial chemicals, many of the suggested elements are expected to apply to other types of chemicals and to ecotoxicological effect assessments.
Subject(s)
Ecotoxicology , Pesticides , Animals , Humans , Research Design , Risk Assessment
ABSTRACT
To support rapid chemical toxicity assessment and mechanistic hypothesis generation, here we present an intuitive webtool allowing a user to identify target organs in the human body where a substance is estimated to be more likely to produce effects. This tool, called Tox21BodyMap, incorporates results for 9,270 chemicals tested by the United States federal Tox21 research consortium in 971 high-throughput screening (HTS) assays whose targets were mapped onto human organs using organ-specific gene expression data. Via Tox21BodyMap's interactive tools, users can visualize chemical target specificity by organ system, and implement different filtering criteria by changing gene expression thresholds and activity concentration parameters. Dynamic network representations, data tables, and plots with comprehensive activity summaries across all Tox21 HTS assay targets provide an overall picture of chemical bioactivity. The Tox21BodyMap webserver is available at https://sandbox.ntp.niehs.nih.gov/bodymap/.
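The organ-mapping idea can be sketched as two user-tunable filters, one on organ-specific gene expression and one on assay potency, as below. The gene symbols are real Tox21 assay targets, but the expression values, thresholds, and AC50s are invented, and the webserver's actual criteria may differ.

```python
# Hedged sketch of gene-to-organ mapping with tunable thresholds.
expression_tpm = {  # gene -> organ -> expression level (invented)
    "ESR1": {"uterus": 90.0, "liver": 12.0, "lung": 1.0},
    "AHR":  {"liver": 55.0, "lung": 20.0},
}
assay_hits = {"ESR1": 4.2, "AHR": 28.0}  # gene target -> AC50 (uM), one chemical

TPM_THRESHOLD = 10.0  # minimum organ expression to map a gene to an organ
AC50_CUTOFF = 25.0    # maximum AC50 (uM) to count assay activity as a hit

organs = set()
for gene, ac50 in assay_hits.items():
    if ac50 > AC50_CUTOFF:
        continue  # activity too weak under this potency filter
    for organ, tpm in expression_tpm.get(gene, {}).items():
        if tpm >= TPM_THRESHOLD:
            organs.add(organ)
print(sorted(organs))  # ['liver', 'uterus'] for these toy numbers
```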
Subject(s)
Software , Toxicity Tests/methods , Gene Expression/drug effects , High-Throughput Screening Assays , Humans , Internet , Organ Specificity
ABSTRACT
High-throughput screening (HTS) research programs for drug development or chemical hazard assessment are designed to screen thousands of molecules across hundreds of biological targets or pathways. Most HTS platforms use fluorescence and luminescence technologies, representing more than 70% of the assays in the US Tox21 research consortium. These technologies are subject to interference signals largely explained by chemicals interacting with the light spectrum. This phenomenon results in up to 5-10% false-positive results, depending on the chemical library used. Here, we present the InterPred webserver (version 1.0), a platform to predict such interference chemicals based on the first large-scale chemical screening effort to directly characterize chemical-assay interference, using assays in the Tox21 portfolio specifically designed to measure autofluorescence and luciferase inhibition. InterPred combines 17 quantitative structure activity relationship (QSAR) models built using optimized machine learning techniques and allows users to predict the probability that a new chemical will interfere with different combinations of cellular and technology conditions. InterPred models have been applied to the entire Distributed Structure-Searchable Toxicity (DSSTox) Database (~800,000 chemicals). The InterPred webserver is available at https://sandbox.ntp.niehs.nih.gov/interferences/.
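Conceptually, applying such an ensemble looks like the sketch below: each assay-condition model yields an interference probability that is compared to a decision cutoff. The condition names, probabilities, and threshold are invented for illustration; the real server combines 17 optimized QSAR models whose internals are not reproduced here.

```python
# Hedged sketch of per-condition interference flagging (not InterPred's
# actual models). Probabilities would come from trained QSAR models.
conditions = {  # condition name -> predicted interference probability
    "autofluorescence_blue_cellfree": 0.82,
    "autofluorescence_green_cellbased": 0.61,
    "luciferase_inhibition": 0.12,
}

THRESHOLD = 0.5  # illustrative decision cutoff per model
for name, p in conditions.items():
    flagged = p >= THRESHOLD
    print(f"{name:35s} interference likely: {flagged}")
# A chemical flagged for a condition warrants caution when interpreting
# HTS readouts that rely on the corresponding detection technology.
```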
Subject(s)
High-Throughput Screening Assays , Software , Artifacts , Fluorescence , Internet , Machine Learning , Pharmaceutical Preparations/chemistry , Quantitative Structure-Activity Relationship , Workflow
ABSTRACT
U.S. regulatory and research agencies use ecotoxicity test data to assess the hazards associated with substances that may be released into the environment, including but not limited to industrial chemicals, pharmaceuticals, pesticides, food additives, and color additives. These data are used to conduct hazard assessments and evaluate potential risks to aquatic life (e.g., invertebrates, fish), birds, wildlife species, or the environment. To identify opportunities for regulatory uses of non-animal replacements for ecotoxicity tests, the needs and uses for data from tests utilizing animals must first be clarified. Accordingly, the objective of this review was to identify the ecotoxicity test data relied upon by U.S. federal agencies. The standards, test guidelines, guidance documents, and/or endpoints that are used to address each of the agencies' regulatory and research needs regarding ecotoxicity testing are described in the context of their application to decision-making. Testing and information use, needs, and/or requirements relevant to the regulatory or programmatic mandates of the agencies taking part in the Interagency Coordinating Committee on the Validation of Alternative Methods Ecotoxicology Workgroup are captured. This information will be useful for coordinating efforts to develop and implement alternative test methods to reduce, refine, or replace animal use in chemical safety evaluations.