ABSTRACT
The scientific and ethical issues associated with the use of animal-derived antibodies in research can be overcome by the use of animal-free, sequence-defined recombinant antibodies, whose benefits are well documented. Here, we describe progress made following a 2019 expert meeting focused on improving the quality and reproducibility of biomedical research by accelerating the production and use of animal-free recombinant antibodies in the USA. In the five years since the meeting, participants have established multifaceted initiatives to tackle the next steps outlined there. These initiatives include: prioritising the replacement of ascites-derived and polyclonal antibodies; distributing educational materials describing recombinant antibodies; fostering public-private partnerships to increase access to recombinant antibodies; and increasing the availability of funding for recombinant antibody development. Given the widespread use of antibodies across scientific disciplines, a transition to modern antibody production methods relies on a commitment from government agencies, universities, industry and funding organisations to initiatives such as those outlined here.
Subjects
Antibodies, United States, Animals, Antibodies/immunology, Animal Testing Alternatives, Humans, Recombinant Proteins/immunology
ABSTRACT
Toxicants with the potential to bioaccumulate in humans and animals have long been a cause for concern, particularly due to their association with multiple diseases and organ injuries. Per- and polyfluoroalkyl substances (PFAS) and polycyclic aromatic hydrocarbons (PAH) are two such classes of chemicals that bioaccumulate and have been associated with steatosis in the liver. Although PFAS and PAH are classified as chemicals of concern, their molecular mechanisms of toxicity remain to be explored in detail. In this study, we aimed to identify potential mechanisms by which an acute exposure to PFAS and PAH chemicals can induce lipid accumulation, and whether the responses depend on chemical class, dose, and sex. To this end, we analyzed mechanisms beginning with the binding of the chemical to a molecular initiating event (MIE) and the consequent transcriptomic alterations. We collated potential MIEs using predictions from our previously developed ToxProfiler tool and from published steatosis adverse outcome pathways. Most of the MIEs are transcription factors, and we collected their target genes by mining the TRRUST database. To analyze the effects of PFAS and PAH on steatosis mechanisms, we performed a computational MIE-target gene analysis on high-throughput transcriptomic measurements of liver tissue from male and female rats exposed to either a PFAS or a PAH. The results showed peroxisome proliferator-activated receptor (PPAR)-α targets to be the most dysregulated, with most of these genes upregulated. Furthermore, PFAS exposure disrupted several lipid metabolism genes, including upregulation of fatty acid oxidation genes (Acadm, Acox1, Cpt2, Cyp4a1-3) and downregulation of lipid transport genes (Apoa1, Apoa5, Pltp). We also identified multiple genes with sex-specific behavior. Notably, the rate-limiting genes of gluconeogenesis (Pck1) and bile acid synthesis (Cyp7a1) were specifically downregulated in male rats compared to female rats, while the rate-limiting gene of lipid synthesis (Scd) showed a PFAS-specific upregulation. The results suggest that the PPAR signaling pathway plays a major role in PFAS-induced lipid accumulation in rats. Together, these results show that PFAS exposure induces a sex-specific, multi-factorial mechanism involving the rate-limiting genes of gluconeogenesis and bile acid synthesis that could lead to activation of an adverse outcome pathway for steatosis.
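The MIE-target gene analysis described above — mapping transcription-factor MIEs to their target genes and summarizing the direction of dysregulation — can be sketched as a simple aggregation. All mappings and fold-change values below are invented for illustration; they are not the study's TRRUST data or expression results:

```python
# Illustrative MIE-target gene summary (hypothetical data, not the study's).
mie_targets = {
    "PPARA": ["Acadm", "Acox1", "Cpt2", "Cyp4a1"],  # assumed PPAR-alpha targets
    "NR1H4": ["Cyp7a1", "Apoa1"],                   # assumed FXR targets
}

# log2 fold-changes from a hypothetical exposed-vs-control comparison
log2fc = {"Acadm": 1.8, "Acox1": 2.1, "Cpt2": 1.2, "Cyp4a1": 3.0,
          "Cyp7a1": -1.5, "Apoa1": -0.9}

def summarize_mie(mie, threshold=1.0):
    """Count up-/down-regulated targets of one MIE past a fold-change cutoff."""
    genes = mie_targets[mie]
    up = sum(1 for g in genes if log2fc.get(g, 0.0) >= threshold)
    down = sum(1 for g in genes if log2fc.get(g, 0.0) <= -threshold)
    return {"mie": mie, "n_targets": len(genes), "up": up, "down": down}

summary = [summarize_mie(m) for m in mie_targets]
# PPARA: all 4 targets up; NR1H4: 1 down (Apoa1 misses the cutoff)
```

Ranking MIEs by the fraction of dysregulated targets, as in the abstract, would then surface PPAR-α as the most affected regulator in this toy example.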
ABSTRACT
Traditionally, chemical toxicity is determined by in vivo animal studies, which are low throughput, expensive, and sometimes fail to predict compound toxicity in humans. Given the increasing number of chemicals in use and the high rate of drug candidate failure due to toxicity, it is imperative to develop in vitro, high-throughput screening methods to determine toxicity. The Tox21 program, a unique research consortium of federal public health agencies, was established to address and identify toxicity concerns in a high-throughput, concentration-responsive manner using a battery of in vitro assays. In this article, we review the advancements in high-throughput robotic screening methodology and informatics processes that enable the generation of toxicological data, and their impact on the field; further, we discuss the future of assessing environmental toxicity using efficient and scalable methods that better represent the corresponding biological and toxicodynamic processes in humans.
Subjects
High-Throughput Screening Assays, Toxicology, Animals, Humans, High-Throughput Screening Assays/methods, Toxicology/methods
ABSTRACT
Chemical regulatory authorities around the world require systemic toxicity data from acute exposures via the oral, dermal, and inhalation routes for human health risk assessment. To identify opportunities for regulatory uses of non-animal replacements for these tests, we reviewed acute systemic toxicity testing requirements for jurisdictions that participate in the International Cooperation on Alternative Test Methods (ICATM): Brazil, Canada, China, the European Union, Japan, South Korea, Taiwan, and the USA. The chemical sectors included in our review of each jurisdiction were cosmetics, consumer products, industrial chemicals, pharmaceuticals, medical devices, and pesticides. We found acute systemic toxicity data were most often required for hazard assessment, classification, and labeling, and to a lesser extent quantitative risk assessment. Where animal methods were required, animal reduction methods were typically recommended. For many jurisdictions and chemical sectors, non-animal alternatives are not accepted, but several jurisdictions provide guidance to support the use of test waivers to reduce animal use for specific applications. An understanding of international regulatory requirements for acute systemic toxicity testing will inform ICATM's strategy for the development, acceptance, and implementation of non-animal alternatives to assess the health hazards and risks associated with acute toxicity.
ABSTRACT
Progress in developing new tools, assays, and approaches to assess human hazard and health risk provides an opportunity to re-evaluate the necessity of dog studies for the safety evaluation of agrochemicals. A workshop was held where participants discussed the strengths and limitations of past use of dogs for pesticide evaluations and registrations. Opportunities were identified to support alternative approaches to answer human safety questions without performing the required 90-day dog study. Development of a decision tree for determining when the dog study might not be necessary to inform pesticide safety and risk assessment was proposed. Such a process will require global regulatory authority participation to lead to its acceptance. The identification of unique effects in dogs that are not identified in rodents will need further evaluation and determination of their relevance to humans. The establishment of in vitro and in silico approaches that can provide critical data on relative species sensitivity and human relevance will be an important tool to advance the decision process. Promising novel tools including in vitro comparative metabolism studies, in silico models, and high-throughput assays able to identify metabolites and mechanisms of action leading to development of adverse outcome pathways will need further development. To replace or eliminate the 90-day dog study, a collaborative, multidisciplinary, international effort that transcends organizations and regulatory agencies will be needed in order to develop guidance on when the study would not be necessary for human safety and risk assessment.
Subjects
Adverse Outcome Pathways, Pesticides, Animals, Dogs, Humans, Agrochemicals/toxicity, Pesticides/toxicity, Risk Assessment, Computer Simulation
ABSTRACT
Robust and efficient processes are needed to establish scientific confidence in new approach methodologies (NAMs) if they are to be considered for regulatory applications. NAMs need to be fit for purpose, reliable and, for the assessment of human health effects, provide information relevant to human biology. They must also be independently reviewed and transparently communicated. Ideally, NAM developers should communicate with stakeholders such as regulators and industry to identify the question(s) and specified purpose that the NAM is intended to address, and the context in which it will be used. Assessment of the biological relevance of the NAM should focus on its alignment with human biology, mechanistic understanding, and ability to provide information that leads to health-protective decisions, rather than solely comparing NAM-based chemical testing results with those from traditional animal test methods. However, when NAM results are compared to historical animal test results, the variability observed within animal test method results should be used to inform performance benchmarks. Building on previous efforts, this paper proposes a framework comprising five essential elements to establish scientific confidence in NAMs for regulatory use: fitness for purpose, human biological relevance, technical characterization, data integrity and transparency, and independent review. Universal uptake of this framework would facilitate the timely development and use of NAMs by the international community. While this paper focuses on NAMs for assessing the human health effects of pesticides and industrial chemicals, many of the suggested elements are expected to apply to other types of chemicals and to ecotoxicological effect assessments.
Subjects
Ecotoxicology, Pesticides, Animals, Humans, Research Design, Risk Assessment
ABSTRACT
The U.S. Department of Agriculture (USDA) regulates the potency testing of leptospirosis vaccines, which are administered to animals to protect against infection by Leptospira bacteria. Despite the long-term availability of in vitro test methods for assessing batch potency, the use of hamsters in lethal in vivo batch potency testing persists to varying degrees across leptospirosis vaccine manufacturers. For all manufacturers of these products, data collected from public USDA records show an estimated 40% decline in the annual use of hamsters from 2014 to 2020, with an estimated 55% decrease in the number of hamsters expected to have been used in leptospirosis vaccine potency tests (i.e., those in USDA Category E). An estimated 49,000 hamsters were used in 2020, with about 15,000 hamsters in Category E specifically. Based on this assessment, additional efforts are needed to fully implement in vitro batch potency testing as a replacement for the in vivo batch potency test. We propose steps that can be taken collaboratively by the USDA Center for Veterinary Biologics (CVB), manufacturers of leptospirosis vaccines, government agencies, and non-governmental organizations to accelerate broader use of the in vitro approach.
Subjects
Leptospira, Leptospirosis, Animals, Bacterial Vaccines, Biological Assay, Cricetinae, Leptospirosis/prevention & control, Leptospirosis/veterinary, United States, Vaccine Potency
ABSTRACT
Computational modeling grounded in reliable experimental data can help design effective non-animal approaches to predict the eye irritation and corrosion potential of chemicals. The National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) has compiled and curated a database of in vivo eye irritation studies from the scientific literature and from stakeholder-provided data. The database contains 810 annotated records of 593 unique substances, including mixtures, categorized according to UN GHS and US EPA hazard classifications. This study reports a set of in silico models to predict EPA and GHS hazard classifications for chemicals and mixtures, accounting for purity by setting thresholds of 100% and 10% concentration. We used two approaches to predict the classification of mixtures: conventional and mixture-based. Conventional models evaluated each substance based on the chemical structure of its major component. These models achieved balanced accuracies in the range of 68-80% and 87-96% for the 100% and 10% test concentration thresholds, respectively. Mixture-based models, which accounted for all known components of a substance by weighted feature averaging, showed similar or slightly higher accuracies of 72-79% and 89-94% for the respective thresholds. We also noted a strong trend between the pH feature calculated for each substance and its activity. Across all the models, the calculated pH of inactive substances was, on average, within one log10 unit of neutral pH, while for active substances, pH varied from neutral by at least 2 log10 units. This pH dependency is especially important for complex mixtures. Additional evaluation on an external test set of 673 substances obtained from ECHA dossiers achieved balanced accuracies of 64-71%, which suggests that these models can be useful in screening compounds for ocular irritation potential. Negative predictive value was particularly high, indicating the potential application of these models in a bottom-up approach to identify nonirritant substances.
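The mixture-based approach above — averaging descriptors over all known components, weighted by their fractions — can be sketched as follows. The descriptor names and values are invented for illustration, and note that treating pH as a linearly averageable feature is a modeling simplification rather than a chemical law:

```python
# Sketch of weighted feature averaging for a mixture (hypothetical descriptors).
def mixture_descriptors(components):
    """components: list of (weight_fraction, descriptor_dict).
    Returns the fraction-weighted average of each descriptor."""
    total = sum(w for w, _ in components)
    keys = components[0][1].keys()
    return {k: sum(w * d[k] for w, d in components) / total for k in keys}

# Two hypothetical components: 80% of one chemical, 20% of another
mix = mixture_descriptors([
    (0.8, {"logP": 2.0, "pH": 7.0}),
    (0.2, {"logP": -1.0, "pH": 2.0}),
])
# mix["logP"] == 1.4, mix["pH"] == 6.0
```

The resulting averaged descriptor vector can then be fed to the same classifier used for single chemicals, which is what makes the mixture-based models directly comparable to the conventional ones.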
Subjects
Irritants, Toxic Optic Neuropathy, Animal Testing Alternatives, Animals, Computer Simulation, Eye, Humans, Irritants/toxicity, United States, United States Environmental Protection Agency
ABSTRACT
Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability of the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize the variability and reproducibility of results across a set of up to 2441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization at an average likelihood of only 60%. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat, feed used, treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as the sources of variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we were able to quantify a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
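The conditional-probability idea — how often two independent studies of the same chemical land in the same hazard category — can be illustrated with GHS-style acute oral cutoffs. The replicate LD50 values below are invented; the point is that values straddling a category boundary produce discordant classifications even with modest variability:

```python
from itertools import combinations

def ghs_category(ld50):
    """GHS acute oral category from an LD50 in mg/kg (standard cutoffs)."""
    for cat, cutoff in [(1, 5), (2, 50), (3, 300), (4, 2000), (5, 5000)]:
        if ld50 <= cutoff:
            return cat
    return None  # not classified

def replicate_concordance(studies):
    """studies: {chemical: [LD50, ...]}. Fraction of replicate study pairs
    assigned the same hazard category."""
    same = total = 0
    for ld50s in studies.values():
        for a, b in combinations(ld50s, 2):
            total += 1
            same += ghs_category(a) == ghs_category(b)
    return same / total

# Hypothetical replicate studies for three chemicals: A straddles the
# 50 mg/kg boundary and C the 2000 mg/kg boundary, so only B agrees.
demo = {"A": [40, 60], "B": [250, 290], "C": [1500, 2500]}
```

Run over thousands of chemicals with replicate records, this kind of tally yields the ~60% concordance the abstract reports.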
Subjects
Reproducibility of Results, Animals, Databases, Factual, Probability, Rats, Risk Assessment/methods, Toxicity Tests, Acute/methods
ABSTRACT
Rodent cancer bioassays have long been required studies for the regulatory assessment of human cancer hazard and risk. These studies use hundreds of animals, are resource intensive, and certain aspects of them have limited human relevance. The past 10 years have seen an exponential growth of new technologies with the potential to effectively evaluate human cancer hazard and risk while reducing, refining, or replacing animal use. To streamline and facilitate uptake of new technologies, a workgroup composed of scientists from government, academia, non-governmental organizations, and industry stakeholders developed a framework for waiver rationales for rodent cancer bioassays for consideration in agrochemical safety assessment. The workgroup used an iterative approach, incorporating regulatory agency feedback and identifying critical information to be considered in a risk assessment-based weight-of-evidence determination of the need for rodent cancer bioassays. The reporting framework described herein was developed to support a chronic toxicity and carcinogenicity study waiver rationale, which includes information on use pattern(s), exposure scenario(s), pesticidal mode of action, physicochemical properties, metabolism, toxicokinetics, toxicological data including mechanistic data, and chemical read-across from similar registered pesticides. The framework could also be applied to endpoints other than chronic toxicity and carcinogenicity, and to chemicals other than agrochemicals.
Subjects
Neoplasms, Pesticides, Agrochemicals/toxicity, Animals, Biological Assay, Carcinogenicity Tests, Pesticides/toxicity, Risk Assessment, Rodentia
ABSTRACT
The AR-CALUX® in vitro method is a reporter gene-based transactivation method that detects endocrine-active chemicals with androgenic or anti-androgenic potential. Its primary purpose is to screen chemicals for further prioritization and to provide mechanistic (endocrine mode of action) information, as defined by the Organisation for Economic Co-operation and Development (OECD) conceptual framework for the testing and assessment of endocrine-disrupting chemicals. This article describes the conduct and results of an international ring trial with three EU-NETVAL laboratories and the test method developer, organized by EURL ECVAM to validate the method by testing 46 chemicals. Reproducibility within and between laboratories was very good (94.7-100% and 100% concordance of classification, respectively), with low within- and between-laboratory variability (less than 2.5% CV on EC50 values). Moreover, this variability is within the range of other validated, mechanistically similar methods. In comparison to the AR reference list compiled by ICCVAM, almost 100% concordance of classifications was obtained. The method allows the detection of both the agonist and antagonist properties of a chemical. A specificity control test was developed during the validation study and added to the antagonist assay, rendering the assay more specific. A comparison is made with the mechanistically similar AR-EcoScreen™ and 22Rv1/MMTV GR-KO TA methods. The AR-CALUX® method was approved for inclusion in the recently updated OECD test guideline TG 458, which incorporates all three methods.
Subjects
Androgen Receptor Antagonists, Androgens, Androgens/pharmacology, Receptors, Androgen/genetics, Reproducibility of Results, Transcriptional Activation
ABSTRACT
Monocyte activation tests (MAT) are widely available but rarely used in place of animal-based pyrogen tests for the safety assessment of medical devices. To address this issue, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods and the PETA International Science Consortium Ltd. convened a workshop at the National Institutes of Health on September 18-19, 2018. Participants included representatives from MAT testing laboratories, medical device manufacturers, the U.S. Food and Drug Administration's Center for Devices and Radiological Health (CDRH), the U.S. Pharmacopeia, the International Organization for Standardization, and experts in the development of MAT protocols. Discussions covered industry experiences with the MAT, remaining challenges, and how CDRH's Medical Device Development Tools (MDDT) Program, which qualifies tools for use in evaluating medical devices to streamline device development and regulatory evaluation, could be a pathway to qualify the use of the MAT in place of the rabbit pyrogen test and the Limulus amebocyte lysate test for medical device testing. Workshop outcomes and follow-up activities are discussed.
Subjects
Equipment and Supplies/adverse effects, Monocytes/physiology, Toxicity Tests/methods, Animal Testing Alternatives, Animals, Endotoxins, Pyrogens, Rabbits
ABSTRACT
Efforts are underway to develop and implement non-animal approaches that can characterize acute systemic lethality. A workshop was held in October 2019 to discuss developments in the prediction of acute oral lethality for chemicals and mixtures, as well as progress and needs in the understanding and modeling of mechanisms of acute lethality. During the workshop, each speaker led the group through a series of charge questions to determine clear next steps toward the aims of the workshop. Participants concluded that a variety of approaches will be needed and should be applied in a tiered fashion. Non-testing approaches, including waiving tests, computational models for single chemicals, and calculating the acute lethality of mixtures based on the LD50 values of mixture components, could be used for some assessments now, especially in the very toxic or non-toxic classification ranges. Agencies can develop policies indicating the contexts under which mathematical approaches for mixtures assessment are acceptable; to expand applicability, poorly predicted mixtures should be examined to understand discrepancies and adapt the approach. Transparency and an understanding of the variability of in vivo approaches are crucial to facilitate regulatory application of new approaches. In a replacement strategy, mechanistically based in vitro or in silico models will be needed to support non-testing approaches, especially for highly acutely toxic chemicals. The workshop discussed approaches that can be used in the immediate or near term for some applications and identified the remaining actions needed to fully replace the use of animals in acute systemic toxicity testing.
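Calculating the acute lethality of a mixture from component LD50 values, as mentioned above, is typically done with the GHS additivity formula, 100/ATE_mix = Σ(C_i/ATE_i), where C_i is the concentration (%) of component i and ATE_i its acute toxicity estimate. A minimal sketch with a hypothetical two-component mixture:

```python
def mixture_ate(components):
    """GHS additivity formula for acute toxicity estimates:
    100 / ATE_mix = sum(C_i / ATE_i), where C_i is the concentration (%)
    of component i and ATE_i its acute toxicity estimate (mg/kg)."""
    return 100.0 / sum(c / ate for c, ate in components)

# Hypothetical mixture: 50% of a component with ATE 300 mg/kg and
# 50% of a component with ATE 2000 mg/kg
ate = mixture_ate([(50.0, 300.0), (50.0, 2000.0)])
# ate ≈ 522 mg/kg, i.e. GHS category 4 (300 < ATE <= 2000)
```

Note the harmonic-mean structure: the more toxic component dominates the result, which is why this formula is considered usable now in the very toxic and non-toxic classification ranges.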
Subjects
Toxicity Tests, Acute, Animals, Computer Simulation, Humans
ABSTRACT
The Toxicology Forum convened an international state-of-the-science workshop, "Assessing Chemical Carcinogenicity: Hazard Identification, Classification, and Risk Assessment", in December 2020. Challenges related to assessing chemical carcinogenicity were organized under the topics of (1) problem formulation; (2) modes of action; (3) dose-response assessment; and (4) the use of new approach methodologies (NAMs). Key topics included the mechanisms of genotoxic and non-genotoxic carcinogenicity and how these, in conjunction with exposure conditions, might inform dose-response assessments and an overall risk assessment; approaches to evaluate the human relevance of modes of action observed in rodent studies; and the characterization of uncertainties. While the scientific limitations of the traditional rodent chronic bioassay were widely acknowledged, knowledge gaps that need to be overcome to facilitate the further development and uptake of NAMs were also identified. Since a single NAM is unlikely to replace the bioassay, activities to combine NAMs into integrated approaches for testing and assessment, or preferably into defined approaches for testing and assessment that include data interpretation procedures, were identified as urgent research needs. In addition, adverse outcome pathway networks can provide a framework for organizing the available evidence for assessing chemical carcinogenicity. Since a formally accepted decision tree to guide use of the best and most current science in carcinogenicity risk assessment is currently unavailable, the workshop organizers developed and presented a decision matrix, offered in tabular form, to be considered within a carcinogenicity hazard and risk assessment.
Subjects
Carcinogenesis, Carcinogens, Biological Assay, Carcinogenicity Tests/methods, Carcinogens/toxicity, Humans, Risk Assessment/methods
ABSTRACT
Moving toward species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency's httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.
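The IVIVE step described above — estimating the in vivo exposure corresponding to an in vitro bioactivity concentration — commonly uses reverse dosimetry: if a toxicokinetic model (such as the httk package the ICE tool wraps) predicts the steady-state plasma concentration for a unit dose rate of 1 mg/kg/day, the equivalent administered dose scales linearly under linear kinetics. The function and numbers below are an illustrative simplification, not ICE's actual implementation:

```python
def equivalent_administered_dose(ac50_uM, css_uM_per_unit_dose):
    """Reverse dosimetry under linear kinetics: the administered dose rate
    (mg/kg/day) whose predicted steady-state plasma concentration equals
    the in vitro AC50. css_uM_per_unit_dose is the predicted Css (uM)
    for a dose rate of 1 mg/kg/day."""
    return ac50_uM / css_uM_per_unit_dose

# Hypothetical chemical: AC50 = 10 uM, predicted Css = 2 uM per 1 mg/kg/day
ead = equivalent_administered_dose(10.0, 2.0)  # 5.0 mg/kg/day
```

Comparing such an equivalent administered dose against an exposure estimate is what turns in vitro bioactivity concentrations into a screening-level margin of exposure.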
Subjects
Chemical Safety, Risk Assessment, Animal Testing Alternatives, Animals, Databases, Factual, High-Throughput Screening Assays, Humans, Toxicity Tests
ABSTRACT
PURPOSE: OptiSafe is an in chemico test method that identifies potential eye irritants based on macromolecular damage following test chemical exposure. The OptiSafe protocol includes a prescreen assessment that identifies test chemicals outside the applicability domain of the test method and thus determines the optimal procedure. We assessed the usefulness and limitations of the OptiSafe test method for identifying chemicals not requiring classification for ocular irritation (i.e., a bottom-up testing strategy). MATERIALS AND METHODS: Seventeen chemicals were selected by the lead laboratory and tested as an independent study. Ninety-five unique coded chemicals were selected by a validation management team to assess the intra- and interlaboratory reproducibility and accuracy of OptiSafe in a multilaboratory, three-phased validation study. Three laboratories (the lead laboratory and two naïve laboratories) evaluated 35 chemicals, with the remaining 60 chemicals evaluated by the lead laboratory only. Test method performance was assessed by comparing classifications based on OptiSafe results to classifications based on available retrospective in vivo data, using both the EPA and GHS eye irritation hazard classification systems. No prospective in vivo testing was conducted. RESULTS: Phase I testing of five chemicals showed that the method could be transferred to naïve laboratories; within-laboratory reproducibility ranged from 93% to 100% for both classification systems. Thirty coded chemicals were evaluated in Phase II of the validation study to demonstrate both intra- and interlaboratory reproducibility. Intralaboratory reproducibility for both the EPA and GHS classification systems in Phase II ranged from 93% to 99%, while interlaboratory reproducibility was 91% for both systems. Test method accuracy for the EPA and GHS classification systems based on results from individual laboratories ranged from 82% to 88% and from 78% to 88%, respectively, among the three laboratories; false negative rates ranged from 0% to 7% (EPA) and 0% to 15% (GHS). When results across all three laboratories were combined based on the majority classification, test method accuracy and false negative rates were 89% and 0%, respectively, for both classification systems, while false positive rates were 25% and 23% for the EPA and GHS classification systems, respectively. In Phase III of the validation study, evaluation of an additional 60 chemicals by the lead laboratory provided a comprehensive assessment of test method accuracy and defined the applicability domain of the method. Based on chemicals tested in Phases II and III by the lead laboratory, test method accuracy was 83% and 79% for the EPA and GHS classification systems, respectively; false negative rates were 4% (EPA) and 0% (GHS); and false positive rates were 40% (EPA) and 42% (GHS). Potential causes of false positives in certain chemical classes (e.g., ethers and alcohols) or hazard classes are being further investigated. CONCLUSION: The OptiSafe test method is useful for identifying nonsurfactant substances not requiring classification for ocular irritancy. OptiSafe represents a new tool for the in vitro assessment of ocular toxicity in a tiered testing strategy where chemicals can be initially tested and identified as not requiring hazard classification.
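The performance figures reported above (accuracy, false negative and false positive rates) follow from a standard 2x2 confusion-matrix calculation, with balanced accuracy averaging sensitivity and specificity. A small sketch with invented counts, not the study's data:

```python
def performance(tp, fn, fp, tn):
    """Standard binary-classification metrics from confusion-matrix counts:
    tp/fn = irritants correctly/incorrectly called, tn/fp likewise for
    non-irritants."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "false_negative_rate": fn / (tp + fn),
        "false_positive_rate": fp / (tn + fp),
    }

# Hypothetical counts: 18 irritants called positive, 2 missed;
# 60 non-irritants called negative, 20 false alarms
m = performance(tp=18, fn=2, fp=20, tn=60)
# accuracy 0.78, balanced accuracy 0.825, FN rate 0.10, FP rate 0.25
```

The asymmetry in this toy example mirrors the abstract's profile for a bottom-up screen: a low false negative rate is the critical property, purchased at the cost of a higher false positive rate.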
Subjects
Animal Testing Alternatives, Eye/drug effects, Irritants/toxicity, Toxicity Tests, Acute/methods, Hydrogen-Ion Concentration, Irritants/chemistry, Macromolecular Substances/chemistry, Reproducibility of Results, Solubility, Water/chemistry
ABSTRACT
An international expert working group representing 37 organisations (pharmaceutical/biotechnology companies, contract research organisations, academic institutions and regulatory bodies) collaborated in a data-sharing exercise to evaluate the utility of two species within regulatory general toxicology studies. Anonymised data on 172 drug candidates (92 small molecules, 46 monoclonal antibodies, 15 recombinant proteins, 13 synthetic peptides and 6 antibody-drug conjugates) were submitted by 18 organisations. The use of one or two species across molecule types, the frequency of reduction to a single species within the package of general toxicology studies, and a comparison of the target organ toxicities identified in each species in both short- and longer-term studies were determined. Reduction to a single species for longer-term toxicity studies, as used for the development of biologicals (ICH S6(R1) guideline), was applied for only 8 of 133 drug candidates, but might have been possible for more, regardless of drug modality, as similar target organ toxicity profiles were identified in the short-term studies. However, definition and harmonisation of the criteria for similarity of toxicity profiles are needed to enable wider consideration of these principles. Analysis of a more robust dataset would be required to provide clear, evidence-based recommendations for expansion of these principles to small molecules or other modalities where two-species toxicity testing is currently recommended.
Subjects
Drug Development, Drug Evaluation, Preclinical/adverse effects, Toxicity Tests, Animals, Databases, Factual, Humans, Risk Assessment
ABSTRACT
The need to develop new tools and increase capacity to test pharmaceuticals and other chemicals for potential adverse impacts on human health and the environment is an active area of development. Much of this activity was sparked by two reports from the US National Research Council (NRC) of the National Academies of Sciences, Toxicity Testing in the Twenty-first Century: A Vision and a Strategy (2007) and Science and Decisions: Advancing Risk Assessment (2009), both of which advocated for "science-informed decision-making" in the field of human health risk assessment. The response to these calls for a "paradigm shift" toward using new approach methodologies (NAMs) for safety assessment has resulted in an explosion of initiatives by numerous organizations, but, for the most part, these have been carried out independently and are not coordinated in any meaningful way. To help remedy this situation, a multi-stakeholder group of industry, academic, and regulatory experts developed a framework that presents a consistent set of criteria, universal across initiatives, for evaluating a NAM's fitness for purpose. The goal of this framework is to support greater consistency across existing and future initiatives by providing a structure to collect relevant information and build the confidence that will accelerate, facilitate and encourage development of new NAMs that can ultimately be used within the appropriate regulatory contexts. In addition, the framework provides a systematic approach to evaluate currently available NAMs and determine their suitability for potential regulatory application. This three-step evaluation framework, along with its demonstrated application in case studies, will help build confidence in the scientific understanding of these methods and their value for chemical assessment and regulatory decision-making.
Subjects
Decision Making, Safety Management, Humans, Risk Assessment, Toxicity Tests
ABSTRACT
Antibodies are used in a range of research, diagnostic, and regulatory applications. Traditional methods for producing such reagents involve the immunization of animals, which introduces variability into the methods that use them and is not aligned with efforts to replace and reduce animal use. Experts from academia, biotechnology, government, and animal protection organizations met December 3, 2019, at the National Institutes of Health in Bethesda, MD, USA to discuss the status of development and use of animal-free recombinant antibodies and their potential to replace antibodies derived from animals. This paper summarizes the discussion and the actions that resulted to facilitate increased production and use of animal-free recombinant antibodies.