ABSTRACT
The ICH S6(R1) recommendations on safety evaluation of biotherapeutics have led to uncertainty in determining what would constitute a cause for concern that would require genotoxicity testing. A workgroup of the Health and Environmental Sciences Institute's Genetic Toxicology Technical Committee was formed to review the current practice of genotoxicity assessment of peptide/protein-related biotherapeutics. A number of properties of peptide/protein-related biotherapeutics distinguish such products from traditional 'small molecule' drugs and need to be taken into consideration when assessing whether genotoxicity testing may be warranted and, if so, how to conduct it appropriately. Case examples were provided by participating companies, and decision trees were elaborated to determine whether and when genotoxicity evaluation is needed for peptides containing natural amino acids, non-natural amino acids and other chemical entities, and for unconjugated and conjugated proteins. From a scientific point of view, there is no reason for testing peptides containing exclusively natural amino acids, irrespective of the manufacturing process. If non-natural amino acids, organic linkers and other non-linker chemical components have already been tested for genotoxicity, there is no need to re-evaluate them when used in different peptide/protein-related biotherapeutics. Unless the peptides have been modified to be able to enter cells, it is generally more appropriate to evaluate peptides containing non-natural amino acids and other non-linker chemical moieties in vivo, where the cleavage products can be formed. For linkers, it is important to determine whether exposure to reactive forms is likely to occur and from what origin. When the linkers are anticipated to be potential mutagenic impurities, they should be evaluated according to ICH M7. If linkers are expected to be catabolic products, it is recommended to test the entire conjugate in vivo, as this would ensure that the relevant 'free' linker forms stemming from in vivo catabolism are tested.
Subject(s)
Guidelines as Topic, Mutagenicity Tests/methods, Mutagens/toxicity, Peptides/toxicity, Animals, Humans, Mutagens/adverse effects, Peptides/adverse effects, Peptides/therapeutic use
ABSTRACT
Genotoxicity hazard identification is part of the impurity qualification process for drug substances and products, the first step of which is the prediction of their potential DNA reactivity using in silico (quantitative) structure-activity relationship (Q)SAR models/systems. This white paper provides information relevant to the development of the draft harmonized tripartite guideline ICH M7 on potentially DNA-reactive/mutagenic impurities in pharmaceuticals and their application in practice. It explains relevant (Q)SAR methodologies as well as the added value of expert knowledge. Moreover, the predictive value of the different methodologies analyzed in two surveys conducted in the US and European pharmaceutical industry is compared: most pharmaceutical companies used a rule-based expert system as their primary methodology, yielding negative predictivity values of ≥78% in all participating companies. A further increase (>90%) was often achieved by an additional expert review and/or a second QSAR methodology. Also in the latter case, an expert review was mandatory, especially when conflicting results were obtained. Based on the available data, we concluded that a rule-based expert system complemented by either expert knowledge or a second (Q)SAR model is appropriate. Maximal transparency of the assessment process (e.g. methods, results, arguments of the weight-of-evidence approach), achieved for example through data-sharing initiatives and the use of reporting standards, will enable regulators to fully understand the results of the analysis. Overall, the procedures presented here for structure-based assessment are considered appropriate for regulatory submissions in the scope of ICH M7.
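For illustration of the performance metric quoted above, negative predictivity is the fraction of (Q)SAR-negative calls that are confirmed negative in the Ames test. The following minimal Python sketch computes it from a hypothetical confusion matrix; the counts are placeholders, not data from the surveys.

def negative_predictivity(true_neg, false_neg):
    # Fraction of (Q)SAR-negative predictions that are truly Ames-negative.
    return true_neg / (true_neg + false_neg)

def sensitivity(true_pos, false_neg):
    # Fraction of Ames-positive compounds correctly flagged by the (Q)SAR call.
    return true_pos / (true_pos + false_neg)

# Hypothetical confusion-matrix counts for a rule-based expert system.
tp, fp, tn, fn = 40, 25, 400, 12
print(f"negative predictivity = {negative_predictivity(tn, fn):.1%}")   # ~97%
print(f"sensitivity = {sensitivity(tp, fn):.1%}")                       # ~77%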
Subject(s)
Mutagenicity Tests/methods, Mutagens/chemistry, Mutagens/toxicity, Computer Simulation, DNA Damage, Drug Contamination, Drug Industry/methods, Quantitative Structure-Activity Relationship
ABSTRACT
The optimal use of historical control data for the interpretation of genotoxicity results was discussed at the 2009 International Workshop on Genotoxicity Testing (IWGT) in Basel, Switzerland. The historical control working group focused mainly on negative control data, although positive control data were also considered important. Historical control data are typically used for comparison with the concurrent control data as part of the assay acceptance criteria. Historical control data are also important for providing evidence of a laboratory's technical competence and familiarity with the assay. Moreover, historical control data are increasingly being used to aid in the interpretation of genetic toxicity assay results. The objective of the working group was to provide generic advice for historical control data that could be applied to all assays rather than to give assay-specific recommendations. In brief, the recommendations include:
Subject(s)
Mutagenicity Tests/methods, Guidelines as Topic, Quality Control
ABSTRACT
At the 2009 International Workshop on Genotoxicity Testing in Basel, an expert group gathered to provide guidance on suitable follow-up tests to describe risk when basic in vivo genotoxicity tests have yielded positive results. The working group agreed that non-linear dose-response curves occur in vivo with at least some DNA-reactive agents. Quantitative risk assessment in such cases requires the use of (1) adequate data, i.e., the use of all available data for the selection of reliable in vivo models to be used for quantitative risk assessment, (2) appropriate mathematical models and statistical analysis for characterizing the dose-response relationships and allowing the use of quantitative and dose-response information in the interpretation of results, (3) mode of action (MOA) information for the evaluation and analysis of risk, and (4) reliable assessments of the internal dose across species for deriving acceptable margins of exposure and risk levels. Hence, the elucidation of MOA and understanding of the mechanism underlying the dose-response curve are important components of risk assessment. The group agreed on the need for (i) the development of in vivo assays, especially multi-endpoint, multi-species assays, with emphasis on those applicable to humans, and (ii) consensus about the most appropriate mathematical models and statistical analyses for defining non-linear dose-responses and exposure levels associated with acceptable risk.
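As a purely illustrative companion to point (2), the Python sketch below fits a simple exponential model to hypothetical in vivo mutant-frequency data and derives a benchmark dose (BMD10) as one possible point of departure; the model form, benchmark response and data are assumptions, not recommendations from the working group.

import numpy as np
from scipy.optimize import curve_fit

doses = np.array([0.0, 5.0, 10.0, 25.0, 50.0])       # mg/kg/day (hypothetical)
mutant_freq = np.array([3.1, 3.3, 3.9, 6.2, 11.8])    # x 10^-6 (hypothetical)

def exp_model(d, background, slope):
    # Simple continuous model: response = background * exp(slope * dose).
    return background * np.exp(slope * d)

(background, slope), _ = curve_fit(exp_model, doses, mutant_freq, p0=(3.0, 0.02))

# BMD for a 10% increase over background (benchmark response BMR = 0.10):
# background * exp(slope * BMD10) = 1.10 * background  =>  BMD10 = ln(1.10) / slope
bmd10 = np.log(1.10) / slope
print(f"fitted background = {background:.2f} x 10^-6, slope = {slope:.4f} per mg/kg/day")
print(f"BMD10 (hypothetical) = {bmd10:.1f} mg/kg/day")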
Subject(s)
Mutagenicity Tests/methods, Animals, Drug Dose-Response Relationship, Humans, Mathematics, Theoretical Models, Risk Assessment, Statistics as Topic
ABSTRACT
We present a hypothetical case study to examine the use of a next-generation framework developed by the Genetic Toxicology Technical Committee of the Health and Environmental Sciences Institute for assessing the potential risk of genetic damage from a pharmaceutical perspective. We used etoposide, a genotoxic carcinogen, as a representative pharmaceutical for the purposes of this case study. Using the framework as guidance, we formulated a hypothetical scenario for the use of etoposide to illustrate the application of the framework to pharmaceuticals. We collected available data on etoposide considered relevant for assessment of genetic toxicity risk. From the data collected, we conducted a quantitative analysis to estimate margins of exposure (MOEs) to characterize the risk of genetic damage that could be used for decision-making regarding the predefined hypothetical use. We found the framework useful for guiding the selection of appropriate tests and relevant endpoints that reflected the potential for genetic damage in patients. The risk characterization, presented as MOEs, allows decision makers to discern how much benefit is needed to balance any adverse effect(s) that may be induced by the pharmaceutical. Interestingly, pharmaceutical development already incorporates several aspects of the framework per regulations and health authority expectations. Moreover, we observed that quality dose-response data can be obtained from carefully planned but routinely conducted genetic toxicity testing. This case study demonstrates the utility of the next-generation framework for quantitatively modeling human risk based on genetic damage, as applicable to pharmaceuticals.
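The margin-of-exposure arithmetic used in such a characterization is simple; the Python sketch below shows the calculation with placeholder numbers rather than the etoposide values from the case study.

def margin_of_exposure(pod, exposure):
    # MOE = point of departure / estimated human exposure (same dose units).
    return pod / exposure

pod_mg_per_kg_day = 0.5         # hypothetical PoD from a genetic toxicity endpoint
exposure_mg_per_kg_day = 0.002  # hypothetical human exposure estimate
print(f"MOE = {margin_of_exposure(pod_mg_per_kg_day, exposure_mg_per_kg_day):.0f}")  # 250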
Subject(s)
Phytogenic Antineoplastic Agents/adverse effects, Etoposide/adverse effects, Animals, DNA Damage, Genomics, Humans
ABSTRACT
A genotoxic carcinogen, N-nitrosodimethylamine (NDMA), was detected as a synthesis impurity in some valsartan drugs in 2018, and other N-nitrosamines, such as N-nitrosodiethylamine (NDEA), were later detected in other sartan products. N-nitrosamines are pro-mutagens that can react with DNA following metabolism to produce DNA adducts, such as O6-alkyl-guanine. The adducts can result in DNA replication miscoding errors leading to GC>AT mutations and increased risk of genomic instability and carcinogenesis. Both NDMA and NDEA are known carcinogens in male and female rats. The DNA repair enzyme methylguanine-DNA methyltransferase can restore DNA integrity via the removal of alkyl groups from guanine in an error-free fashion, and this can result in nonlinear dose responses and a point of departure or "practical threshold" for mutation at low doses of exposure. Following international recommendations (ICH M7, ICH Q3C and ICH Q3D), we calculated permissible daily exposures (PDEs) for NDMA and NDEA using published rodent cancer bioassay and in vivo mutagenicity data to determine benchmark dose values, define points of departure, and adjust with appropriate uncertainty factors (UFs). PDEs for NDMA were 6.2 and 0.6 µg/person/day for cancer and mutation, respectively, and for NDEA, 2.2 and 0.04 µg/person/day. Both sets of PDEs are higher than the acceptable daily intake values (96 ng for NDMA and 26.5 ng for NDEA) calculated by regulatory authorities using simple linear extrapolation from carcinogenicity data. These PDE calculations using a benchmark dose approach provide a more robust assessment of exposure limits compared with simple linear extrapolations and can better inform risk to patients exposed to the contaminated sartans.
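For readers unfamiliar with the arithmetic, an ICH Q3C-style PDE divides a rodent point of departure (scaled to a 50-kg person) by the product of uncertainty factors F1-F5. The Python sketch below illustrates the structure of the calculation with placeholder values; it does not reproduce the NDMA or NDEA derivations reported here.

def pde_ug_per_day(bmdl_mg_per_kg_day, body_weight_kg=50.0,
                   f1=5.0,    # interspecies extrapolation (rat to human)
                   f2=10.0,   # inter-individual variability
                   f3=1.0,    # study duration
                   f4=10.0,   # severity of effect
                   f5=1.0):   # quality of the point of departure
    pde_mg = bmdl_mg_per_kg_day * body_weight_kg / (f1 * f2 * f3 * f4 * f5)
    return pde_mg * 1000.0    # convert mg/day to ug/day

# Hypothetical BMDL of 0.05 mg/kg/day with the placeholder factors above.
print(f"PDE = {pde_ug_per_day(0.05):.1f} ug/person/day")   # 5.0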
Subject(s)
DNA Adducts, Environmental Exposure/analysis, Mutation, Nitrosamines/toxicity, Chemical Water Pollutants/toxicity, Animals, Carcinogens/toxicity, Female, Male, Rats
ABSTRACT
Cytosine arabinoside (a nucleoside analogue that inhibits the gap-filling step of excision repair), vinblastine (an aneugen that inhibits tubulin polymerisation), 5-fluorouracil (a nucleoside analogue with a steep response profile), and 2-aminoanthracene (a metabolism-dependent reference genotoxin) were tested in the in vitro micronucleus assay with L5178Y mouse lymphoma cells, without cytokinesis block. The four chemicals were independently evaluated in two Sanofi Aventis laboratories, one of which used an image analyser to score micronuclei, while the other scored micronucleated cells manually. Very similar results were obtained in the two laboratories, highlighting the robustness of the assay. The four test chemicals induced significant increases in the incidence of micronucleated cells at concentrations that produced no more than a 55±5% reduction in survival/growth, as measured with the three parameters recommended for chemical testing in the draft OECD Test Guideline on the In Vitro Mammalian Cell Micronucleus Test (MNvit), namely the relative increase in cell counts, the relative population doubling, and the relative cell count. These results support the premise that the relative increase in cell counts and the relative population doubling, which take into account both cell death and cytostasis, are appropriate measures of the reduction in survival/growth in the in vitro micronucleus test conducted in the absence of cytokinesis block, as recommended in the MNvit guideline.
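The three cytotoxicity parameters named above have standard definitions; the Python sketch below computes them from hypothetical cell counts to show how they differ (the relative increase in cell counts and the relative population doubling account for both cell death and cytostasis, whereas the relative cell count compares final counts only).

import math

def ricc(n0, treated_final, control_final):
    # Relative Increase in Cell Counts (%): increase in treated vs. control cultures.
    return 100.0 * (treated_final - n0) / (control_final - n0)

def rpd(n0, treated_final, control_final):
    # Relative Population Doubling (%): population doublings in treated vs. control.
    return 100.0 * math.log2(treated_final / n0) / math.log2(control_final / n0)

def rcc(treated_final, control_final):
    # Relative Cell Count (%): final treated count vs. final control count.
    return 100.0 * treated_final / control_final

n0 = 2.0e5                        # cells seeded per culture (hypothetical)
control, treated = 8.0e5, 4.5e5   # counts at harvest (hypothetical)
print(f"RICC = {ricc(n0, treated, control):.0f}%")   # ~42%
print(f"RPD  = {rpd(n0, treated, control):.0f}%")    # ~58%
print(f"RCC  = {rcc(treated, control):.0f}%")        # ~56%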
Subject(s)
Micronucleus Tests/methods, Mutagens/toxicity, Animals, Anthracenes/toxicity, Cell Count, Tumor Cell Line, Cytarabine/toxicity, Fluorouracil/toxicity, France, Guidelines as Topic, Leukemia L5178/genetics, Mice, Micronucleus Tests/standards, Vinblastine/toxicity
ABSTRACT
A collaborative trial was conducted to evaluate the possibility of integrating the rat-liver Comet assay into repeat-dose toxicity studies. Fourteen laboratories from Europe, Japan and the USA tested fifteen chemicals: two chemicals previously shown to induce micronuclei in an acute protocol but found negative in a 4-week micronucleus (MN) assay (benzo[a]pyrene and 1,2-dimethylhydrazine; Hamada et al., 2001); four genotoxic rat-liver carcinogens that were negative in the MN assay in bone marrow or blood (2,6-dinitrotoluene, dimethylnitrosamine, 1,2-dibromoethane, and 2-amino-3-methylimidazo[4,5-f]quinoline); three compounds used in the ongoing JaCVAM (Japanese Center for the Validation of Alternative Methods) validation study of the acute liver Comet assay (2,4-diaminotoluene, 2,6-diaminotoluene and acrylamide); three pharmaceutical-like compounds (chlordiazepoxide, pyrimethamine and gemifloxacin); and three non-genotoxic rodent liver carcinogens (methapyrilene, clofibrate and phenobarbital). Male rats received oral administrations of the test compounds daily for two or four weeks. The top dose was meant to be the highest dose producing clinical signs or histopathological effects without causing mortality, i.e. the 28-day maximum tolerated dose. The liver Comet assay was performed according to published recommendations and following the protocol for the ongoing JaCVAM validation trial. Laboratories provided liver Comet assay data obtained at the end of the long-term (2- or 4-week) studies together with an evaluation of liver histology. Most of the test compounds were also investigated in the liver Comet assay after short-term (1-3 daily) administration to compare the sensitivity of the two study designs. MN analyses were conducted in bone marrow or peripheral blood for most of the compounds to determine whether the liver Comet assay could complement the MN assay for the detection of genotoxins after long-term treatment. Most of the liver genotoxins were positive, and the three non-genotoxic carcinogens gave negative results in the liver Comet assay after long-term administration. There was a high concordance between short- and long-term Comet assay results. Most compounds, when tested up to the maximum tolerated dose, were correctly detected in both short- and long-term studies. Discrepant results were obtained with 2,6-diaminotoluene (negative in the short-term, but positive in the long-term study), phenobarbital (positive in the short-term, but negative in the long-term study) and gemifloxacin (positive in the short-term, but negative in the long-term study). The overall results indicate that the liver Comet assay can be integrated within repeat-dose toxicity studies and efficiently complements the MN assay in detecting genotoxins. Practical aspects of integrating genotoxicity endpoints into repeat-dose studies were evaluated, e.g. by investigating the effect of blood sampling, as typically performed during toxicity studies, on the Comet and MN assays. The bleeding protocols used here did not affect the conclusions of the Comet assay or of the MN assays in blood and bone marrow. Although bleeding generally increased reticulocyte frequencies, the sensitivity of the response in the MN assay was not altered. These findings indicate that all animals in a toxicity study (main-study animals as well as toxicokinetic (TK) satellite animals) could be used for evaluating genotoxicity.
However, possible logistical issues with scheduling of the necropsies and the need to conduct electrophoresis promptly after tissue sampling suggest that the use of TK animals could be simpler. The data so far do not indicate that liver proliferation or toxicity confound the results of the liver Comet assay. As was also true for other genotoxicity assays, criteria for evaluation of Comet assay results and statistical analyses differed among laboratories. Whereas comprehensive advice on statistical analysis is available in the literature, agreement is needed on applying consistent criteria.
Subject(s)
Mutagens/toxicity, Animals, Carcinogens/toxicity, Comet Assay/methods, Drug Dose-Response Relationship, Drug Administration Schedule, Liver/drug effects, Male, Micronucleus Tests/methods, Rats, Wistar Rats, Toxicity Tests
ABSTRACT
In May 2017, the Health and Environmental Sciences Institute's Genetic Toxicology Technical Committee hosted a workshop to discuss whether mode of action (MOA) investigation is enhanced through the application of the adverse outcome pathway (AOP) framework. As AOPs are a relatively new approach in genetic toxicology, this report describes how AOPs could be harnessed to advance MOA analysis of genotoxicity pathways using five example case studies. Each of these genetic toxicology AOPs proposed for further development includes the relevant molecular initiating events, key events, and adverse outcomes (AOs), identification and/or further development of the appropriate assays to link an agent to these events, and discussion regarding the biological plausibility of the proposed AOP. A key difference between these proposed genetic toxicology AOPs and traditional AOPs is that the AO is a genetic toxicology endpoint of potential significance in risk characterization, in contrast to an adverse state of an organism or a population. The first two detailed case studies describe provisional AOPs for aurora kinase inhibition and tubulin binding, leading to the common AO of aneuploidy. The remaining three case studies highlight provisional AOPs that lead to chromosome breakage or mutation via indirect DNA interaction (inhibition of topoisomerase II, production of cellular reactive oxygen species, and inhibition of DNA synthesis). These case studies serve as starting points for genotoxicity AOPs that could ultimately be published and utilized by the broader toxicology community, and they illustrate the practical considerations and evidence required to formalize such AOPs so that they may be applied to genetic toxicity evaluation schemes. Environ. Mol. Mutagen. 61:114-134, 2020. © 2019 Wiley Periodicals, Inc.
Subject(s)
Adverse Outcome Pathways, Mutagenicity Tests, Mutagens/toxicity, Aneuploidy, Animals, Aurora Kinase A/antagonists & inhibitors, Chromosome Breakage/drug effects, DNA Damage/drug effects, Humans, Mutagenicity Tests/methods, Mutation/drug effects
ABSTRACT
We live in an era of 'big data', where the volume, velocity, and variety of the data being generated are increasingly influencing the way toxicological sciences are practiced. With this in mind, a workgroup was formed for the 2017 International Workshops on Genotoxicity Testing (IWGT) to consider the use of high information content data in genetic toxicology assessments. Presentations were given on adductomics, global transcriptional profiling, error-reduced single-molecule sequencing, and cellular phenotype-based assays, which were identified as methodologies that are relevant to present-day genetic toxicology assessments. Presenters and workgroup members discussed the state of the science for these methodologies, their potential use in genetic toxicology, current limitations, and the future work necessary to advance their utility and application. The session culminated with audience-assisted SWOT (strengths, weaknesses, opportunities, and threats) analyses. The summary report described herein is structured similarly. A major conclusion of the workgroup is that while conventional regulatory genetic toxicology testing has served the public well over the last several decades, it does not provide the throughput that has become necessary in modern times, and it does not generate the mechanistic information that risk assessments ideally take into consideration. The high information content assay platforms that were discussed in this session, as well as others under development, have the potential to address aspects of these issues and to meet new expectations in the field of genetic toxicology.
Subject(s)
Mutagenicity Tests/methods, Animals, Big Data, Cell Line, DNA Adducts/analysis, Taxonomic DNA Barcoding/methods, DNA Damage, Data Mining, Preclinical Drug Evaluation, Gene Expression Profiling/methods, High-Throughput Nucleotide Sequencing, Humans, Computer-Assisted Image Processing, Mass Spectrometry/methods, Meta-Analysis as Topic, Mice, Mutagenicity Tests/standards, Phenotype, Single Molecule Imaging, Toxicology/methods, Transcriptome
ABSTRACT
A database of 91 chemicals with published data from both transgenic rodent mutation (TGR) and rodent comet assays has been compiled. The objective was to compare the sensitivity of the two assays for detecting genotoxicity. Critical aspects of study design and results were tabulated for each dataset. There were fewer datasets from rats than from mice, particularly for the TGR assay, and therefore results from both species were combined for further analysis. TGR and comet responses were compared in liver and bone marrow (the most commonly studied tissues), and in stomach and colon evaluated either separately or in combination with other GI tract segments. Overall positive, negative, or equivocal test results were assessed for each chemical across the tissues examined in the TGR and comet assays using two approaches: 1) overall calls based on weight of evidence (WoE) and expert judgement, and 2) curation of the data based on a priori acceptability criteria prior to deriving final tissue-specific calls. Since the database contains a high prevalence of positive results, overall agreement between the assays was determined using statistics adjusted for prevalence (AC1 and PABAK). These coefficients showed fair or moderate to good agreement for liver and the GI tract (predominantly stomach and colon data) using WoE, reduced agreement for stomach and colon evaluated separately using data curation, and poor or no agreement for bone marrow using both the WoE and data curation approaches. Confidence in these results is higher for liver than for the other tissues, for which there were fewer data. Our analysis finds that the comet and TGR assays generally identify the same compounds (mainly potent mutagens) as genotoxic in liver, stomach and colon, but not in bone marrow. However, the current database content precluded drawing assay concordance conclusions for weak mutagens and non-DNA-reactive chemicals.
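For readers unfamiliar with these prevalence-adjusted statistics, the Python sketch below computes PABAK and Gwet's AC1 for the simple two-assay, positive/negative case; the paired calls are hypothetical and do not come from the database described here.

def pabak(pairs):
    # Prevalence-and-bias-adjusted kappa: 2 * observed agreement - 1.
    p_obs = sum(a == b for a, b in pairs) / len(pairs)
    return 2.0 * p_obs - 1.0

def gwet_ac1(pairs):
    # Gwet's AC1 for two raters (assays) and two categories (positive/negative).
    n = len(pairs)
    p_obs = sum(a == b for a, b in pairs) / n
    # Mean proportion of 'positive' calls across the two assays.
    pi_hat = (sum(a == "+" for a, _ in pairs) + sum(b == "+" for _, b in pairs)) / (2 * n)
    p_chance = 2.0 * pi_hat * (1.0 - pi_hat)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Hypothetical (TGR call, comet call) pairs for one tissue.
calls = [("+", "+")] * 14 + [("-", "-")] * 3 + [("+", "-")] * 2 + [("-", "+")] * 1
print(f"PABAK = {pabak(calls):.2f}, AC1 = {gwet_ac1(calls):.2f}")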
Subject(s)
Bone Marrow/drug effects, Colon/drug effects, Comet Assay/methods, Liver/drug effects, Mutagens/toxicity, Mutation, Stomach/drug effects, Animals, Genetically Modified Animals, DNA Damage, Female, Male, Mice, Micronucleus Tests, Rats
ABSTRACT
The recent revisions of the Organisation for Economic Co-operation and Development (OECD) genetic toxicology test guidelines emphasize the importance of historical negative controls both for data quality and interpretation. The goal of a HESI Genetic Toxicology Technical Committee (GTTC) workgroup was to collect data from participating laboratories and to conduct a statistical analysis to understand and publish the range of values that are normally seen in experienced laboratories using TK6 cells to conduct the in vitro micronucleus assay. Negative control data from in vitro micronucleus assays using TK6 cells were collected from 13 laboratories using a standard collection form. Although statistically significant differences were seen in some cases within laboratories for different test conditions, they were very small. The mean incidence of micronucleated cells/1000 cells ranged from 3.2/1000 to 13.8/1000. These almost four-fold differences in micronucleus levels cannot be explained by differences in scoring method, presence or absence of exogenous metabolic activation (S9), length of treatment, presence or absence of cytochalasin B, or the different solvents used as vehicles. The range of means from the four laboratories using flow cytometry methods (3.7-fold: 3.5-12.9 micronucleated cells/1000 cells) was similar to that from the nine laboratories using other scoring methods (4.3-fold: 3.2-13.8 micronucleated cells/1000 cells). No laboratory could be identified as an outlier or as showing unacceptably high variability. Quality control (QC) methods applied to analyse the intra-laboratory variability showed evidence of inter-experimental variability greater than would be expected by chance (i.e. over-dispersion), although in general this was low. This study demonstrates the value of QC methods in helping to analyse the reproducibility of results, build up a 'normal' range of values, and identify variability within a laboratory so that processes can be implemented to maintain and improve uniformity.
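One simple QC-style check of the kind alluded to, sketched below in Python with hypothetical counts, compares the between-experiment variability of negative-control micronucleus frequencies with the variability expected from binomial sampling alone; a dispersion factor appreciably above 1 suggests over-dispersion. This is only an illustration, not the specific method used by the workgroup.

import numpy as np
from scipy import stats

mn_cells = np.array([7, 5, 12, 6, 9, 4, 11, 8])   # micronucleated cells per experiment (hypothetical)
scored   = np.array([1000] * 8)                    # cells scored per experiment

p_hat = mn_cells.sum() / scored.sum()
expected = scored * p_hat
# Pearson heterogeneity statistic; under pure binomial sampling it is approximately
# chi-square distributed with (number of experiments - 1) degrees of freedom.
het = ((mn_cells - expected) ** 2 / (expected * (1 - p_hat))).sum()
dof = len(mn_cells) - 1
p_value = stats.chi2.sf(het, dof)
dispersion = het / dof   # values well above 1 indicate over-dispersion beyond binomial noise

print(f"dispersion factor = {dispersion:.2f}, heterogeneity p = {p_value:.3f}")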
Subject(s)
Cell Nucleus/genetics, Research Design/standards, Cell Line, Humans, Chromosome-Defective Micronuclei, Micronucleus Tests, Quality Control
ABSTRACT
Based on the assumption that compounds with similar toxic modes of action induce specific gene expression changes, the toxicity of unknown compounds can be predicted by comparing their molecular fingerprints with those obtained with compounds of known toxicity. These predictive models therefore rely on the characterization of marker genes. Toxicogenomics (TGX) also provides mechanistic insight into the mode of toxicity, and can therefore be used as an adjunct to the standard battery of genotoxicity tests. Promising results, highlighting the ability of TGX to differentiate genotoxic from non-genotoxic carcinogens, as well as DNA-reactive from non-DNA-reactive genotoxins, have been reported. Additional data suggested the possibility of ranking genotoxins according to the nature of their interactions with DNA. This new approach could contribute to the improvement of risk assessment. TGX could be applied as a follow-up testing strategy in the case of positive in vitro genotoxicity findings, and could improve our ability to identify the molecular mechanism of action and to better assess dose-response curves. TGX has been found to be less sensitive than the standard genotoxicity endpoints, probably because it measures the response of the whole cell population, whereas the standard tests are designed to detect rare events in a small number of cells. Further validation will be needed (1) to better link the profiles obtained with TGX to the established genotoxicity endpoints, (2) to improve the gene annotation tools, and (3) to standardize study design and data analysis and to better evaluate the impact of variability between platforms and bioinformatics approaches.
Subject(s)
Toxicogenetics/methods, Toxicogenetics/standards, Animals, Carcinogens/toxicity, Cell Line, Gene Expression/drug effects, Mice, Genetic Models, Mutagenicity Tests/methods, Mutagenicity Tests/standards, Mutagens/toxicity, Oligonucleotide Array Sequence Analysis, Risk Assessment/methods, Risk Assessment/standards
ABSTRACT
Gene expression profiling technology is expected to advance our understanding of genotoxic mechanisms involving direct or indirect interaction with DNA. We exposed human lymphoblastoid TK6 cells to 14 anticancer drugs (vincristine, paclitaxel, etoposide, daunorubicin, camptothecin, amsacrine, cytosine arabinoside, hydroxyurea, methotrexate, 5-fluorouracil, cisplatin, 1-(2-chloroethyl)-3-cyclohexyl-1-nitrosourea (CCNU), 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU), and bleomycin) for 4 h and examined them either immediately or after a 20-h recovery period. Cytotoxicity and genotoxicity were evaluated at 24 h by cell counting and by the in vitro micronucleus assay, respectively. Effects on the cell cycle were determined by flow cytometry at 4 and 24 h. Gene expression was profiled at both sampling times using human Affymetrix U133A GeneChips (22K). Bioanalysis was done with Resolver/Rosetta software and an in-house annotation program. Cell cycle analysis and gene expression profiling allowed us to classify the drugs according to their mechanisms of action. The molecular signature is composed of 28 marker genes mainly involved in signal transduction and cell cycle pathways. Our results suggest that these marker genes could be used as a predictive model to classify genotoxins according to their direct or indirect interaction with DNA.
Subject(s)
Antineoplastic Agents/toxicity, Mutagens/toxicity, Cell Cycle/drug effects, Cell Line, Cell Survival/drug effects, Gene Expression Profiling, Humans, Micronucleus Tests, Biological Models, Oligonucleotide Array Sequence Analysis, Thymidine Kinase/genetics
ABSTRACT
In vitro genotoxicity assays are often used to screen and predict whether chemicals might represent mutagenic and carcinogenic risks for humans. Recent discussions have focused on the high rate of positive results in in vitro tests, especially in assays performed in mammalian cells, that are not confirmed in vivo. Currently, there is no general consensus in the scientific community on the interpretation of the significance of positive results from in vitro genotoxicity assays. To address this issue, the Health and Environmental Sciences Institute (HESI) held an international workshop in June 2006 to discuss the relevance and follow-up of positive results in in vitro genetic toxicity assays. The goals of the meeting were to examine ways to advance the scientific basis for the interpretation of positive findings in in vitro assays, to facilitate the development of follow-up testing strategies, and to define criteria for determining the relevance to human health. The workshop identified specific needs in two general categories, i.e., improved testing and improved data interpretation and risk assessment. Recommendations to improve testing included: (1) re-examine the maximum level of cytotoxicity currently required for in vitro tests; (2) re-examine the upper limit concentration for in vitro mammalian studies; (3) develop improved testing strategies using current in vitro assays; (4) define criteria to guide selection of the appropriate follow-up in vivo studies; (5) develop new and more predictive in vitro and in vivo tests. Recommendations for improving interpretation and assessment included: (1) examine the suitability of applying the threshold of toxicological concern concept to genotoxicity data; (2) develop a structured weight-of-evidence approach for assessing genotoxic/carcinogenic hazard; and (3) re-examine in vitro and in vivo correlations qualitatively and quantitatively. Conclusions from the workshop highlighted a willingness of scientists from various sectors to change and improve the current paradigm and to move from a hazard identification approach to a "realistic" risk-based approach that incorporates information on mechanism of action, kinetics, and human exposure.
Subject(s)
Statistical Data Interpretation, Mutagenicity Tests, Animals, Drug Dose-Response Relationship, Follow-Up Studies, Humans, Mutagenicity Tests/standards, Mutagens/pharmacokinetics, Mutagens/toxicity, Reproducibility of Results, Risk Assessment
ABSTRACT
As part of the Fourth International Workshop on Genotoxicity Testing (IWGT), held 9-10 September 2005 in San Francisco, California, an expert working group on the Comet assay was convened to review and discuss some of the procedures and methods recommended in previous documents. Particular attention was directed at the in vivo rodent, alkaline (pH >13) version of the assay. The aim was to review those protocol areas which were unclear or which required more detail in order to produce a standardized protocol with maximum acceptability by international regulatory agencies. The areas covered were: number of dose levels required, cell isolation techniques, measures of cytotoxicity, scoring of comets (i.e., manually or by image analysis), and the need for historical negative/positive control data. It was decided that a single limit dose was not sufficient although the required number of dose levels was not stipulated. The method of isolating cells was thought not to have a qualitative effect on the assay but more data were needed before a conclusion could be drawn. Concurrent measures of cytotoxicity were required with histopathological examination of tissues for necrosis or apoptosis as the "Gold Standard". As for analysing the comets, the consensus was that image analysis was preferred but not required. Finally, the minimal number of studies required to generate a historical positive or negative control database was not defined; rather the emphasis was placed on demonstrating the stability of the negative/positive control data. It was also agreed that a minimum reporting standard would be developed which would be consistent with OECD in vivo genotoxicity test method guidelines.
Subject(s)
Comet Assay/methods, Animals, Cell Separation/methods, Drug Dose-Response Relationship, Computer-Assisted Image Processing, Rodents
ABSTRACT
Workshop participants agreed that genotoxicity tests in mammalian cells in vitro produce a remarkably high and unacceptable occurrence of irrelevant positive results (e.g. when compared with rodent carcinogenicity). As reported in several recent reviews, the rate of irrelevant positives (i.e. low specificity) for some studies using in vitro methods (when compared with this "gold standard") means that an increased number of test articles are subjected to additional in vivo genotoxicity testing, in many cases before, for example, the efficacy of the compound (in the case of pharmaceuticals) has been evaluated. If in vitro tests were more predictive for in vivo genotoxicity and carcinogenicity (i.e. gave fewer false positives), then there would be a significant reduction in the number of animals used. Beyond animal (or human) carcinogenicity as the "gold standard", it is acknowledged that genotoxicity tests provide much information about cellular behaviour, cell division processes and cellular fate in response to a (geno)toxic insult. Since the disease impact of these effects is seldom known, and verification of relevant toxicity is normally also the subject of (sub)chronic animal studies, the prediction of in vivo relevant results from in vitro genotoxicity tests is also important for aspects that may not have a direct impact on carcinogenesis as the ultimate endpoint of concern. In order to address the high rate of in vitro false positive results, a 2-day workshop was held at the European Centre for the Validation of Alternative Methods (ECVAM), Ispra, Italy, in April 2006. More than 20 genotoxicity experts from academia, government and industry were invited to review data from the currently available cell systems, to discuss whether there exist cells and test systems with a reduced tendency to give false positive results, to review potential modifications to existing protocols and cell systems that might result in improved specificity, and to review the performance of some new test systems that show promise of improved specificity without sacrificing sensitivity. It was concluded that better guidance on the likely mechanisms resulting in positive results that are not biologically relevant for human health, and on how to obtain evidence for those mechanisms, is needed both for practitioners and regulatory reviewers. Participants discussed the fact that cell lines commonly used for genotoxicity testing have a number of deficiencies that may contribute to the high false positive rate. These include, amongst others, lack of normal metabolism leading to reliance on exogenous metabolic activation systems (e.g. Aroclor-induced rat S9), impaired p53 function and altered DNA repair capability. The high concentrations of test chemicals (i.e. 10 mM or 5000 µg/ml, unless precluded by solubility or excessive toxicity) and the high levels of cytotoxicity currently required in mammalian cell genotoxicity tests were discussed as further potential sources of false positive results. Even if the goal is to detect carcinogens with short in vitro tests under more or less acute conditions, it does not seem logical to exceed the capabilities of cellular metabolic turnover, activation and defence processes. The concept of "promiscuous activation" was discussed. For numerous mutagens, the decisive in vivo enzymes are missing in vitro. However, if the substrate concentration is increased sufficiently, some other enzymes (that are unimportant in vivo) may take over the activation, leading to the same or a different active metabolite.
Since we often do not use the right enzyme systems for positive controls in vitro, we have to rely on their promiscuous activation, i.e. use excessive concentrations to obtain an empirical correlation between genotoxicity and carcinogenicity. A thorough review of published and industry data is urgently needed to determine whether the currently required limit concentration of 10 mM or 5000 µg/ml, and the high levels of cytotoxicity, are necessary for the detection of in vivo genotoxins and DNA-reactive, mutagenic carcinogens. In addition, various measures of cytotoxicity are currently allowable under OECD test guidelines, but there are few comparative data on whether different measures would result in different maximum concentrations for testing. A detailed comparison of cytotoxicity assessment strategies is needed. An assessment of whether test endpoints can be selected that are not intrinsically associated with cytotoxicity, and are therefore less susceptible to artefacts produced by cytotoxicity, should also be undertaken. There was agreement amongst the workshop participants that cell systems which are p53- and DNA-repair-proficient, have defined Phase 1 and Phase 2 metabolism covering a broad set of enzyme forms, and are used within the context of appropriately set limits of concentration and cytotoxicity offer the best hope for reduced false positives. Whilst there is some evidence that human lymphocytes are less susceptible to false positives than the current rodent cell lines, other cell systems based on HepG2, TK6 and MCL-5 cells, as well as 3D skin models based on primary human keratinocytes, also show some promise. Other human cell lines such as HepaRG, and human stem cells (the target for carcinogenicity), have not been used for genotoxicity investigations and should be considered for evaluation. Genetic engineering is also a valuable tool to incorporate missing enzyme systems into target cells. A collaborative research programme is needed to identify, further develop and evaluate new cell systems with appropriate sensitivity but improved specificity. In order to review the current data on the selection of appropriate top concentrations, measures and levels of cytotoxicity, and metabolism, and to improve existing or validate new assay systems, the participants called for the establishment of an expert group to identify the in vivo genotoxins and DNA-reactive, mutagenic carcinogens that we expect our in vitro genotoxicity assays to detect, as well as the non-genotoxins and non-carcinogens we expect them not to detect.
Subject(s)
Mutagenicity Tests, Animals, Cultured Cells, False Positive Reactions, Humans, Biological Models, Diagnostic Reagent Kits, Tissue Culture Techniques
ABSTRACT
The Organisation for Economic Co-operation and Development (OECD) recently revised the test guidelines (TGs) for genetic toxicology. This article describes the main issues addressed during the revision process and the new and consistent recommendations made in the revised TGs for: (1) demonstration of laboratory proficiency; (2) generation and use of robust historical control data; (3) improvement of the statistical power of the tests; (4) selection of the top concentration for in vitro assays; (5) consistent data interpretation and determination of whether a result is clearly positive, clearly negative or needs closer consideration; and (6) consideration of the 3Rs in in vivo assay design. The revision process resulted in improved consistency among OECD TGs (including the newly developed ones) and more comprehensive recommendations for the conduct and interpretation of the assays. Altogether, the recommendations made during the revision process should improve the efficiency with which data are generated, and the quality and reliability of test results. Environ. Mol. Mutagen. 58:284-295, 2017. © 2017 Wiley Periodicals, Inc.
Subject(s)
Guidelines as Topic, Mutagenicity Tests/standards, Animals, Humans
ABSTRACT
We previously described a multiplexed in vitro genotoxicity assay based on flow cytometric analysis of detergent-liberated nuclei that are simultaneously stained with propidium iodide and labeled with fluorescent antibodies against p53, γH2AX, and phospho-histone H3. Inclusion of a known number of microspheres provides absolute nuclei counts. The work described herein was undertaken to evaluate the interlaboratory transferability of this assay, commercially known as the MultiFlow® DNA Damage Kit-p53, γH2AX, Phospho-Histone H3. For these experiments, seven laboratories studied reference chemicals from a group of 84 representing clastogens, aneugens, and nongenotoxicants. TK6 cells were exposed to the chemicals in 96-well plates over a range of concentrations for 24 hr. At 4 and 24 hr, cell aliquots were added to the MultiFlow reagent mix and, following a brief incubation period, flow cytometric analysis occurred, in most cases directly from a 96-well plate via a robotic walk-away data acquisition system. Multiplexed response data were evaluated using two analysis approaches, one based on global evaluation factors (i.e., cutoff values derived from all interlaboratory data), and a second based on multinomial logistic regression that considers multiple biomarkers simultaneously. Both data analysis strategies were devised to categorize chemicals as predominantly exhibiting a clastogenic, aneugenic, or nongenotoxic mode of action (MoA). Based on the aggregate of 231 experiments that were performed, assay sensitivity, specificity, and concordance in relation to a priori MoA grouping were ≥92%. These results are encouraging, as they suggest that two distinct data analysis strategies can rapidly and reliably predict new chemicals' predominant genotoxic MoA based on data from an efficient and transferable multiplexed in vitro assay. Environ. Mol. Mutagen. 58:146-161, 2017. © 2017 Wiley Periodicals, Inc.
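As a generic illustration of the second analysis strategy (not the MultiFlow model itself), the Python sketch below trains a multinomial logistic-regression classifier on synthetic biomarker fold-change data and assigns a predominant MoA to a new response profile; the feature layout and values are assumptions for demonstration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synth(center, n=30):
    # Synthetic biomarker fold-change vectors clustered around a class center.
    return rng.normal(loc=center, scale=0.3, size=(n, 4))

# Columns: [gammaH2AX, p53, phospho-histone H3, relative nuclei count] (illustrative only).
X = np.vstack([
    synth([3.0, 2.5, 1.0, 0.6]),   # clastogen-like pattern
    synth([1.0, 1.8, 3.0, 0.7]),   # aneugen-like pattern
    synth([1.0, 1.0, 1.0, 1.0]),   # nongenotoxicant-like pattern
])
y = np.array(["clastogen"] * 30 + ["aneugen"] * 30 + ["nongenotoxic"] * 30)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_chemical = np.array([[2.7, 2.2, 1.1, 0.65]])   # hypothetical 24-hr response profile
print(clf.predict(new_chemical)[0])
print(dict(zip(clf.classes_, clf.predict_proba(new_chemical)[0].round(2))))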
Subject(s)
DNA Damage, Flow Cytometry/methods, Laboratories, Mutagenicity Tests/methods, Mutagens/toxicity, Aneugens/toxicity, Animals, Cell Culture Techniques, Histones/genetics, Humans, Laboratories/standards, Logistic Models, Phosphorylation, Pilot Projects, Reproducibility of Results, Robotics, Sensitivity and Specificity, Tumor Suppressor Protein p53/genetics
ABSTRACT
For several decades, regulatory testing schemes for genetic damage have been standardized, with the tests utilized examining mutations and structural and numerical chromosomal damage. This has served the genetic toxicity community well as long as most of the substances being tested were amenable to such assays. The outcome from this testing is usually a dichotomous (yes/no) evaluation of test results, and in many instances the information is only used to determine whether a substance has carcinogenic potential or not. Over the same time period, mechanisms and modes of action (MOAs) that elucidate a wider range of genomic damage involved in many adverse health outcomes have been recognized. In addition, a paradigm shift in applied genetic toxicology is moving the field toward a more quantitative dose-response analysis and point-of-departure (PoD) determination with a focus on risks to exposed humans. This is directing emphasis toward genomic damage that is likely to induce changes associated with a variety of adverse health outcomes. This paradigm shift is moving the testing emphasis for genetic damage from a hazard-identification-only evaluation to a more comprehensive risk assessment approach that provides more insightful information for decision makers regarding the potential risk of genetic damage to exposed humans. To enable this broader context for examining genetic damage, a next-generation testing strategy needs to take into account a broader, more flexible approach to testing, and ultimately modeling, of genomic damage as it relates to human exposure. This is consistent with the larger risk assessment context being used in regulatory decision making. As presented here, this flexible approach for examining genomic damage focuses on testing for relevant genomic effects that can be, as far as possible, associated with an adverse health effect. The most desired linkage for risk to humans would be changes in loci associated with human diseases, whether in somatic or germ cells. The outline of a flexible approach and associated considerations is presented as a series of nine steps, some of which can occur in parallel; it was developed through a collaborative effort by leading genetic toxicologists from academia, government, and industry through the International Life Sciences Institute (ILSI) Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC). The ultimate goal is to provide quantitative data to model the potential risk levels of substances that induce genomic damage contributing to adverse human health outcomes. Any good risk assessment begins with asking the appropriate risk management questions in a planning and scoping effort. This step sets up the problem to be addressed (e.g., broadly, does genomic damage need to be addressed, and if so, how to proceed). The next two steps assemble what is known about the problem by building a knowledge base about the substance of concern and developing a rational biological argument for why testing for genomic damage is or is not needed. By focusing on the risk management problem and the potential genomic damage of concern, the next step, selection of the assay(s), takes place. The work-up of the problem during the earlier steps provides insight into which assays would most likely produce the most meaningful data. This discussion does not detail the wide range of genomic damage tests available, but points to types of testing systems that can be very useful.
Once the assays are performed and analyzed, the relevant data sets are selected for modeling potential risk. From this point on, the data are evaluated and modeled as they are for any other toxicology endpoint. Any observed genomic damage/effects (or genetic event(s)) can be modeled via a dose-response analysis and determination of an estimated PoD. When a quantitative risk analysis is needed for decision making, a parallel exposure assessment effort is performed (exposure assessment is not detailed here as this is not the focus of this discussion; guidelines for this assessment exist elsewhere). Then the PoD for genomic damage is used with the exposure information to develop risk estimations (e.g., using reference dose (RfD), margin of exposure (MOE) approaches) in a risk characterization and presented to risk managers for informing decision making. This approach is applicable now for incorporating genomic damage results into the decision-making process for assessing potential adverse outcomes in chemically exposed humans and is consistent with the ILSI HESI Risk Assessment in the 21st Century (RISK21) roadmap. This applies to any substance to which humans are exposed, including pharmaceuticals, agricultural products, food additives, and other chemicals. It is time for regulatory bodies to incorporate the broader knowledge and insights provided by genomic damage results into the assessments of risk to more fully understand the potential of adverse outcomes in chemically exposed humans, thus improving the assessment of risk due to genomic damage. The historical use of genomic damage data as a yes/no gateway for possible cancer risk has been too narrowly focused in risk assessment. The recent advances in assaying for and understanding genomic damage, including eventually epigenetic alterations, obviously add a greater wealth of information for determining potential risk to humans. Regulatory bodies need to embrace this paradigm shift from hazard identification to quantitative analysis and to incorporate the wider range of genomic damage in their assessments of risk to humans. The quantitative analyses and methodologies discussed here can be readily applied to genomic damage testing results now. Indeed, with the passage of the recent update to the Toxic Substances Control Act (TSCA) in the US, the new generation testing strategy for genomic damage described here provides a regulatory agency (here the US Environmental Protection Agency (EPA), but suitable for others) a golden opportunity to reexamine the way it addresses risk-based genomic damage testing (including hazard identification and exposure). Environ. Mol. Mutagen. 58:264-283, 2017. © 2016 The Authors. Environmental and Molecular Mutagenesis Published by Wiley Periodicals, Inc.