1.
Environ Mol Mutagen; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38828778

ABSTRACT

Exposure levels without appreciable human health risk may be determined by dividing a point of departure on a dose-response curve (e.g., benchmark dose) by a composite adjustment factor (AF). An "effect severity" AF (ESAF) is employed in some regulatory contexts. An ESAF of 10 may be incorporated into the derivation of a health-based guidance value (HBGV) when a "severe" toxicological endpoint, such as teratogenicity, irreversible reproductive effects, neurotoxicity, or cancer, was observed in the reference study. Although mutation data have historically been used for hazard identification, the endpoint is also suitable for quantitative dose-response modeling and risk assessment. As part of the 8th International Workshops on Genotoxicity Testing, a sub-group of the Quantitative Analysis Work Group (WG) explored how the concept of effect severity could be applied to mutation. To approach this question, the WG reviewed the prevailing regulatory guidance on how an ESAF is incorporated into risk assessments, evaluated current knowledge of associations between germline or somatic mutation and severe disease risk, and mined available data on the fraction of human germline mutations expected to cause severe disease. Based on this review and given that mutations are irreversible and some cause severe human disease, in regulatory settings where an ESAF is used, a majority of the WG recommends applying an ESAF value between 2 and 10 when deriving an HBGV from mutation data. This recommendation may need to be revisited in the future if direct measurement of disease-causing mutations by error-corrected next-generation sequencing clarifies selection of ESAF values.
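As a rough illustration of the arithmetic described in this abstract, the sketch below divides a point of departure by a composite adjustment factor that includes an ESAF. The point of departure and the individual adjustment factors are hypothetical placeholders, not values recommended by the Work Group.

```python
# Minimal sketch of deriving a health-based guidance value (HBGV) from a
# mutagenicity point of departure. All numeric values are hypothetical
# placeholders, not figures recommended by the Work Group.

def derive_hbgv(pod_mg_per_kg_day: float,
                interspecies_af: float = 10.0,
                intraspecies_af: float = 10.0,
                effect_severity_af: float = 2.0) -> float:
    """HBGV = point of departure / composite adjustment factor (AF).

    The composite AF is the product of the individual factors; the
    effect-severity AF (ESAF) is the 2-10 value discussed above.
    """
    composite_af = interspecies_af * intraspecies_af * effect_severity_af
    return pod_mg_per_kg_day / composite_af

# Hypothetical benchmark dose (lower bound) of 5 mg/kg bw/day:
for esaf in (2.0, 10.0):
    print(f"ESAF = {esaf:g}: HBGV = {derive_hbgv(5.0, effect_severity_af=esaf):.4f} mg/kg bw/day")
```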

2.
Environ Mol Mutagen; 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38115239

ABSTRACT

Quantitative risk assessments of chemicals are routinely performed using in vivo data from rodents; however, there is growing recognition that non-animal approaches can be human-relevant alternatives. There is an urgent need to build confidence in non-animal alternatives given the international support to reduce the use of animals in toxicity testing where possible. In order for scientists and risk assessors to prepare for this paradigm shift in toxicity assessment, standardization and consensus on in vitro testing strategies and data interpretation will need to be established. To address this issue, an Expert Working Group (EWG) of the 8th International Workshop on Genotoxicity Testing (IWGT) evaluated the utility of quantitative in vitro genotoxicity concentration-response data for risk assessment. The EWG first evaluated available in vitro methodologies and then examined the variability and maximal response of in vitro tests to estimate biologically relevant values for the critical effect sizes considered adverse or unacceptable. Next, the EWG reviewed the approaches and computational models employed to provide human-relevant dose context to in vitro data. Lastly, the EWG evaluated risk assessment applications for which in vitro data are ready for use and applications where further work is required. The EWG concluded that in vitro genotoxicity concentration-response data can be interpreted in a risk assessment context. However, prior to routine use in regulatory settings, further research will be required to address the remaining uncertainties and limitations.
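A minimal sketch of the kind of concentration-response analysis discussed here: fitting a Hill model to synthetic in vitro data and estimating the concentration at an assumed critical effect size. The data points, the model choice, and the 50%-above-control effect size are illustrative assumptions, not EWG recommendations.

```python
# Sketch of a benchmark-concentration (BMC) estimate from in vitro
# concentration-response data. The data points, the Hill model, and the
# 50%-above-control critical effect size are illustrative assumptions,
# not values endorsed by the EWG.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # treated concentrations (uM), synthetic
response = np.array([1.05, 1.2, 1.9, 3.4, 5.8, 6.4])    # fold-change vs. a solvent control of 1.0

def hill(c, bottom, top, ec50, n):
    return bottom + (top - bottom) * c**n / (ec50**n + c**n)

(bottom, top, ec50, n), _ = curve_fit(hill, conc, response,
                                      p0=[1.0, 6.0, 3.0, 1.0], maxfev=10_000)

control = 1.0
critical_effect = 1.5 * control                          # assumed critical effect size
frac = (critical_effect - bottom) / (top - bottom)
bmc = ec50 * (frac / (1.0 - frac)) ** (1.0 / n)          # invert the Hill equation
print(f"EC50 = {ec50:.2f} uM, BMC at 1.5x control = {bmc:.2f} uM")
```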

3.
Environ Mol Mutagen; 62(9): 512-525, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34775645

ABSTRACT

We present a hypothetical case study to examine the use of a next-generation framework developed by the Genetic Toxicology Technical Committee of the Health and Environmental Sciences Institute for assessing the potential risk of genetic damage from a pharmaceutical perspective. We used etoposide, a genotoxic carcinogen, as a representative pharmaceutical for the purposes of this case study. Using the framework as guidance, we formulated a hypothetical scenario for the use of etoposide to illustrate the application of the framework to pharmaceuticals. We collected available data on etoposide considered relevant for assessment of genetic toxicity risk. From the data collected, we conducted a quantitative analysis to estimate margins of exposure (MOEs) to characterize the risk of genetic damage that could be used for decision-making regarding the predefined hypothetical use. We found the framework useful for guiding the selection of appropriate tests and selecting relevant endpoints that reflected the potential for genetic damage in patients. The risk characterization, presented as MOEs, allows decision makers to discern how much benefit is critical to balance any adverse effect(s) that may be induced by the pharmaceutical. Interestingly, pharmaceutical development already incorporates several aspects of the framework per regulations and health authority expectations. Moreover, we observed that high-quality dose-response data can be obtained with carefully planned but routinely conducted genetic toxicity testing. This case study demonstrates the utility of the next-generation framework to quantitatively model human risk based on genetic damage, as applicable to pharmaceuticals.
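The margin-of-exposure calculation at the core of this case study reduces to a simple ratio; the sketch below uses invented numbers rather than the etoposide values from the study.

```python
# Illustrative margin-of-exposure (MOE) calculation of the kind described in
# this case study. The point of departure and exposure figures are invented
# placeholders, not the etoposide values analysed by the authors.

def margin_of_exposure(pod_mg_per_kg_day: float, exposure_mg_per_kg_day: float) -> float:
    """MOE = point of departure / estimated human exposure (same dose units)."""
    return pod_mg_per_kg_day / exposure_mg_per_kg_day

pod = 0.5                                           # hypothetical genotoxicity point of departure
scenarios = {"therapeutic use": 1.0e-1, "trace occupational contact": 1.0e-5}

for name, exposure in scenarios.items():
    print(f"{name}: MOE = {margin_of_exposure(pod, exposure):,.0f}")
```

Larger MOEs indicate a wider separation between the point of departure and the anticipated exposure, which is the quantity decision makers weigh against the expected benefit of use.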


Subjects
Antineoplastic Agents, Phytogenic/adverse effects; Etoposide/adverse effects; Animals; DNA Damage; Genomics; Humans
4.
Environ Mol Mutagen; 61(1): 94-113, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31709603

ABSTRACT

We recently published a next generation framework for assessing the risk of genomic damage via exposure to chemical substances. The framework entails a systematic approach with the aim to quantify risk levels for substances that induce genomic damage contributing to human adverse health outcomes. Here, we evaluated the utility of the framework for assessing the risk for industrial chemicals, using the case of benzene. Benzene is a well-studied substance that is generally considered a genotoxic carcinogen and is known to cause leukemia. The case study limits its focus to occupational and general population health as it relates to benzene exposure. Using the framework as guidance, we collected available data on benzene considered relevant for the assessment of genetic damage. Based on these data, we were able to conduct quantitative analyses for relevant data sets to estimate acceptable exposure levels and to characterize the risk of genetic damage. Key observations include the need for robust exposure assessments, the importance of information on toxicokinetic properties, and the benefits of cheminformatics. The framework points to the need for further improvement in the understanding of the mechanism(s) of action involved, which would also provide support for the use of targeted tests rather than a prescribed set of assays. Overall, this case study demonstrates the utility of the next generation framework to quantitatively model human risk on the basis of genetic damage, thereby enabling a new, innovative risk assessment concept. Environ. Mol. Mutagen. 61:94-113, 2020. © 2019 The Authors. Environmental and Molecular Mutagenesis published by Wiley Periodicals, Inc. on behalf of Environmental Mutagen Society.
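One exposure-assessment building block mentioned here, converting an occupational air concentration into an absorbed dose, can be sketched as follows. The inhalation rate, absorption fraction, and body weight are generic default assumptions, not values taken from the case study.

```python
# Rough sketch of the exposure-assessment step: converting an occupational air
# concentration of benzene into an absorbed daily dose comparable with a
# point of departure. The inhalation rate, absorption fraction, and body weight
# are generic default assumptions, not values from the case study.

MW_BENZENE = 78.11     # g/mol
MOLAR_VOLUME = 24.45   # L/mol at 25 degrees C and 1 atm

def ppm_to_mg_per_m3(ppm: float) -> float:
    return ppm * MW_BENZENE / MOLAR_VOLUME

def absorbed_dose_mg_per_kg(air_ppm: float,
                            inhaled_air_m3: float = 10.0,      # per 8-h shift, assumed
                            absorption_fraction: float = 0.5,  # assumed
                            body_weight_kg: float = 70.0) -> float:
    return ppm_to_mg_per_m3(air_ppm) * inhaled_air_m3 * absorption_fraction / body_weight_kg

# Example: a full shift at 0.1 ppm benzene in air.
print(f"Absorbed dose: {absorbed_dose_mg_per_kg(0.1):.4f} mg/kg bw per workday")
```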


Subjects
Benzene/toxicity; Carcinogens/toxicity; Mutagenesis/drug effects; Mutagens/toxicity; Animals; Benzene/metabolism; Carcinogens/metabolism; DNA Damage/drug effects; Environmental Exposure/adverse effects; Humans; Leukemia/chemically induced; Leukemia/genetics; Mutagenicity Tests/methods; Mutagens/metabolism; Occupational Exposure/adverse effects; Risk Assessment/methods
5.
Environ Mol Mutagen; 58(5): 264-283, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27650663

ABSTRACT

For several decades, regulatory testing schemes for genetic damage have been standardized where the tests being utilized examined mutations and structural and numerical chromosomal damage. This has served the genetic toxicity community well when most of the substances being tested were amenable to such assays. The outcome from this testing is usually a dichotomous (yes/no) evaluation of test results, and in many instances, the information is only used to determine whether a substance has carcinogenic potential or not. Over the same time period, mechanisms and modes of action (MOAs) that elucidate a wider range of genomic damage involved in many adverse health outcomes have been recognized. In addition, a paradigm shift in applied genetic toxicology is moving the field toward a more quantitative dose-response analysis and point-of-departure (PoD) determination with a focus on risks to exposed humans. This is directing emphasis on genomic damage that is likely to induce changes associated with a variety of adverse health outcomes. This paradigm shift is moving the testing emphasis for genetic damage from a hazard identification only evaluation to a more comprehensive risk assessment approach that provides more insightful information for decision makers regarding the potential risk of genetic damage to exposed humans. To enable this broader context for examining genetic damage, a next generation testing strategy needs to take into account a broader, more flexible approach to testing, and ultimately modeling, of genomic damage as it relates to human exposure. This is consistent with the larger risk assessment context being used in regulatory decision making. As presented here, this flexible approach for examining genomic damage focuses on testing for relevant genomic effects that can be, as best as possible, associated with an adverse health effect. The most desired linkage for risk to humans would be changes in loci associated with human diseases, whether in somatic or germ cells. The outline of a flexible approach and associated considerations are presented in a series of nine steps, some of which can occur in parallel, which was developed through a collaborative effort by leading genetic toxicologists from academia, government, and industry through the International Life Sciences Institute (ILSI) Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC). The ultimate goal is to provide quantitative data to model the potential risk levels of substances, which induce genomic damage contributing to human adverse health outcomes. Any good risk assessment begins with asking the appropriate risk management questions in a planning and scoping effort. This step sets up the problem to be addressed (e.g., broadly, does genomic damage need to be addressed, and if so, how to proceed). The next two steps assemble what is known about the problem by building a knowledge base about the substance of concern and developing a rational biological argument for why testing for genomic damage is needed or not. By focusing on the risk management problem and potential genomic damage of concern, the next step of assay(s) selection takes place. The work-up of the problem during the earlier steps provides the insight to which assays would most likely produce the most meaningful data. This discussion does not detail the wide range of genomic damage tests available, but points to types of testing systems that can be very useful. 
Once the assays are performed and analyzed, the relevant data sets are selected for modeling potential risk. From this point on, the data are evaluated and modeled as they are for any other toxicology endpoint. Any observed genomic damage/effects (or genetic event(s)) can be modeled via a dose-response analysis and determination of an estimated PoD. When a quantitative risk analysis is needed for decision making, a parallel exposure assessment effort is performed (exposure assessment is not detailed here as this is not the focus of this discussion; guidelines for this assessment exist elsewhere). Then the PoD for genomic damage is used with the exposure information to develop risk estimations (e.g., using reference dose (RfD), margin of exposure (MOE) approaches) in a risk characterization and presented to risk managers for informing decision making. This approach is applicable now for incorporating genomic damage results into the decision-making process for assessing potential adverse outcomes in chemically exposed humans and is consistent with the ILSI HESI Risk Assessment in the 21st Century (RISK21) roadmap. This applies to any substance to which humans are exposed, including pharmaceuticals, agricultural products, food additives, and other chemicals. It is time for regulatory bodies to incorporate the broader knowledge and insights provided by genomic damage results into the assessments of risk to more fully understand the potential of adverse outcomes in chemically exposed humans, thus improving the assessment of risk due to genomic damage. The historical use of genomic damage data as a yes/no gateway for possible cancer risk has been too narrowly focused in risk assessment. The recent advances in assaying for and understanding genomic damage, including eventually epigenetic alterations, obviously add a greater wealth of information for determining potential risk to humans. Regulatory bodies need to embrace this paradigm shift from hazard identification to quantitative analysis and to incorporate the wider range of genomic damage in their assessments of risk to humans. The quantitative analyses and methodologies discussed here can be readily applied to genomic damage testing results now. Indeed, with the passage of the recent update to the Toxic Substances Control Act (TSCA) in the US, the new generation testing strategy for genomic damage described here provides a regulatory agency (here the US Environmental Protection Agency (EPA), but suitable for others) a golden opportunity to reexamine the way it addresses risk-based genomic damage testing (including hazard identification and exposure). Environ. Mol. Mutagen. 58:264-283, 2017. © 2016 The Authors. Environmental and Molecular Mutagenesis Published by Wiley Periodicals, Inc.


Subjects
Genomics/methods; Mutagenicity Tests/trends; Animals; Environmental Health; Humans; Models, Theoretical; Mutagenicity Tests/standards; Mutagens/toxicity; Risk Assessment
6.
J Food Prot; 77(8): 1428-40, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25198609

ABSTRACT

Stakeholders in the public health risk analysis community can possess differing opinions about what is meant by "conduct a risk assessment." In reality, there is no one-size-fits-all risk assessment that can address all public health issues, problems, and regulatory needs. Although several international and national organizations (e.g., Codex Alimentarius Commission, Office International des Epizooties, Food and Agricultural Organization, World Health Organization, National Research Council, and European Food Safety Authority) have addressed this issue, confusion remains. The type and complexity of a risk assessment must reflect the risk management needs to appropriately inform a regulatory or nonregulatory decision, i.e., a risk assessment is ideally "fit for purpose" and directly applicable to risk management issues of concern. Frequently, however, there is a lack of understanding by those not completely familiar with risk assessment regarding the specific utility of different approaches for assessing public health risks. This unfamiliarity can unduly hamper the acceptance of risk assessment results by risk managers and may reduce the usefulness of such results for guiding public health policies, practices, and operations. Differences in interpretation of risk assessment terminology further complicate effective communication among risk assessors, risk managers, and stakeholders. This article provides an overview of the types of risk assessments commonly conducted, with examples primarily from the food and agricultural sectors, and a discussion of the utility and limitations of these specific approaches for assessing public health risks. Clarification of the risk management issues and corresponding risk assessment design needs during the formative stages of the risk analysis process is a key step for ensuring that the most appropriate assessment of risk is developed and used to guide risk management decisions.


Subjects
Decision Making; Public Health; Risk Management/methods; Humans; Risk Assessment
7.
Int J Food Microbiol; 162(3): 266-75, 2013 Apr 01.
Article in English | MEDLINE | ID: mdl-23454818

ABSTRACT

This report illustrates how the uncertainty about food safety metrics may influence the selection of a performance objective (PO). To accomplish this goal, we developed a model concerning Listeria monocytogenes in ready-to-eat (RTE) deli meats. This application used a second-order Monte Carlo model that simulates L. monocytogenes concentrations through a series of steps: the food-processing establishment, transport, retail, the consumer's home, and consumption. The model accounted for growth inhibitor use, retail cross-contamination, and applied an FAO/WHO dose-response model for evaluating the probability of illness. An appropriate level of protection (ALOP) risk metric was selected as the average risk of illness per serving across all consumed servings per annum, and the model was used to solve for the corresponding performance objective (PO) risk metric as the maximum allowable L. monocytogenes concentration (cfu/g) at the processing establishment where regulatory monitoring would occur. Given uncertainty about model inputs, an uncertainty distribution of the PO was estimated. Additionally, we considered how RTE deli meats contaminated at levels above the PO would be handled by the industry using three alternative approaches. Points on the PO distribution represent the probability that, if the industry complies with a particular PO, the resulting risk per serving is less than or equal to the target ALOP. For example, assuming (1) a target ALOP of -6.41 log10 risk of illness per serving, (2) industry concentrations above the PO that are re-distributed throughout the remaining concentration distribution, and (3) no dose-response uncertainty, establishment POs of -4.98 and -4.39 log10 cfu/g would be required for 90% and 75% confidence that the target ALOP is met, respectively. The PO concentrations from this example scenario are more stringent than the current typical monitoring level of an absence in 25 g (i.e., -1.40 log10 cfu/g) or a stricter criterion of absence in 125 g (i.e., -2.1 log10 cfu/g). This example, and others, demonstrates that a PO for L. monocytogenes would be far below any current monitoring capabilities. Furthermore, this work highlights the demands placed on risk managers and risk assessors when applying uncertain risk models to the current risk metric framework.
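A deliberately simplified, one-dimensional sketch of the "solve for the PO that meets a target ALOP" logic is shown below. The concentration distribution, growth term, serving size, and dose-response slope are all invented, so the output only illustrates the mechanics, not the published results.

```python
# Deliberately simplified, one-dimensional sketch of "solve for the PO that
# meets a target ALOP". The concentration distribution, growth term, serving
# size, and exponential dose-response slope are all invented; the published
# model is a much more detailed second-order Monte Carlo simulation.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def mean_risk_per_serving(po_log10_cfu_g: float) -> float:
    # Establishment concentrations (log10 cfu/g), capped at the candidate PO.
    conc = np.minimum(rng.normal(-6.0, 2.0, N), po_log10_cfu_g)
    growth = rng.normal(1.0, 1.0, N)              # net log10 growth to consumption
    dose = 10 ** (conc + growth) * 50.0           # cfu in a 50 g serving
    r = 1.0e-7                                    # invented exponential dose-response slope
    return float(np.mean(1.0 - np.exp(-r * dose)))

target_alop = 10 ** -6.41                         # average risk of illness per serving
candidates = np.arange(-8.0, 0.0, 0.25)
feasible = [po for po in candidates if mean_risk_per_serving(po) <= target_alop]
print("Least stringent PO meeting the ALOP:",
      f"{max(feasible):.2f} log10 cfu/g" if feasible else "none found")
```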


Subjects
Food Contamination/prevention & control; Food Microbiology/organization & administration; Listeria monocytogenes/growth & development; Meat/microbiology; Models, Statistical; Risk Management; Food Handling/standards; Humans; Listeria monocytogenes/isolation & purification; Maximum Allowable Concentration; Meat Products/microbiology; Monte Carlo Method; Risk Assessment; Uncertainty
9.
J Food Prot; 72(10): 2151-61, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19833039

ABSTRACT

The U.S. Department of Agriculture, Food Safety and Inspection Service is exploring quantitative risk assessment methodologies to incorporate the use of the Codex Alimentarius' newly adopted risk management metrics (e.g., food safety objectives and performance objectives). It is suggested that use of these metrics would more closely tie the results of quantitative microbial risk assessments (QMRAs) to public health outcomes. By estimating the food safety objective (the maximum frequency and/or concentration of a hazard in a food at the time of consumption) and the performance objective (the maximum frequency and/or concentration of a hazard in a food at a specified step in the food chain before the time of consumption), risk managers will have a better understanding of the appropriate level of protection (ALOP) from microbial hazards for public health protection. Here we demonstrate a general methodology that allows identification of an ALOP and evaluation of corresponding metrics at appropriate points in the food chain. It requires a two-dimensional probabilistic risk assessment, the example used being the Monte Carlo QMRA for Clostridium perfringens in ready-to-eat and partially cooked meat and poultry products, with minor modifications to evaluate and abstract required measures. For demonstration purposes, the QMRA model was applied specifically to hot dogs produced and consumed in the United States. Evaluation of the cumulative uncertainty distribution for illness rate allows a specification of an ALOP that, with defined confidence, corresponds to current industry practices.
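The final step described here, reading an ALOP with defined confidence off the uncertainty distribution of the illness rate, amounts to taking a percentile. A sketch with synthetic draws standing in for the outer loop of the two-dimensional QMRA:

```python
# Sketch of reading an ALOP off the uncertainty distribution of the illness
# rate. The draws below are synthetic stand-ins for the outer (uncertainty)
# loop of the two-dimensional QMRA described in the abstract.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical outer-loop output: mean illness risk per serving from each
# uncertainty iteration, under current industry practice.
illness_rate = 10 ** rng.normal(-7.0, 0.5, 5_000)

confidence = 0.95
alop = np.quantile(illness_rate, confidence)
print(f"ALOP consistent with current practice at {confidence:.0%} confidence: "
      f"{np.log10(alop):.2f} log10 risk per serving")
```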


Subjects
Clostridium perfringens/growth & development; Consumer Product Safety; Food Contamination/analysis; Food Handling/methods; Meat Products/microbiology; Poultry Products/microbiology; Colony Count, Microbial; Cooking/methods; Food Microbiology; Humans; Models, Biological; Monte Carlo Method; Risk Assessment; Risk Factors; Risk Management; United States; United States Department of Agriculture
10.
Environ Mol Mutagen; 46(4): 236-45, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16258925

ABSTRACT

Genetic toxicology data are used worldwide in regulatory decision-making. On the 25th anniversary of Environmental and Molecular Mutagenesis, we think it is important to provide a brief overview of the currently available genetic toxicity tests and to outline a framework for conducting weight-of-the-evidence (WOE) evaluations that optimize the utility of genetic toxicology information for risk assessment. There are two major types of regulatory decisions made by agencies such as the Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA): (1) the approval and registration of pesticides, pharmaceuticals, medical devices, and medical-use products, and (2) the setting of standards for acceptable exposure levels in air, water, and food. Genetic toxicology data are utilized for both of these regulatory decisions. The current default assumption for regulatory decisions is that chemicals that are shown to be genotoxic in standard tests are, in fact, capable of causing mutations in humans (in somatic and/or germ cells) and that they contribute to adverse health outcomes via a "genotoxic/mutagenic" mode of action (MOA). The new EPA Guidelines for Carcinogen Risk Assessment [Guidelines for Carcinogen Risk Assessment, USEPA, 2005, EPA Publication No. EPA/630/P-03/001F] emphasize the use of MOA information in risk assessment and provide a framework to help identify a possible mutagenic and/or nonmutagenic MOA for potential adverse effects. An analysis of the available genetic toxicity data is now, more than ever, a key component to consider in the derivation of an MOA for characterizing observed adverse health outcomes such as cancer. We provide our perspective and a two-step strategy for evaluating genotoxicity data for optimal use in regulatory decision-making. The strategy includes integration of all available information and provides, first, for a WOE analysis as to whether a chemical is a mutagen, and second, whether an adverse health outcome is mediated via a mutagenic MOA.


Subjects
Databases, Genetic; Dose-Response Relationship, Drug; Mutagens/toxicity; Toxicology/methods; Animals; Decision Support Techniques; Humans; Mutagenicity Tests; Risk Assessment; Software Design
11.
Integr Environ Assess Manag; 1(1): 73-6, 2005 Jan.
Article in English | MEDLINE | ID: mdl-16637150

ABSTRACT

Recently, the U.S. Environmental Protection Agency examined its current risk-assessment principles and practices. As part of the examination, aspects of ecological risk-assessment practices were reviewed. Several issues related to ecological risk assessment were identified, including the use of organism-level versus population-level attributes to characterize risk, the possible opportunities associated with the increased use of probabilistic approaches for ecological risk assessment, and the notion of conservatism in estimating risks. The agency's examination provides an understanding of current practices and is intended to begin a dialogue through which the risk assessment community can address the identified issues and improve ecological risk assessment.


Subjects
Environment; Risk Assessment; United States Environmental Protection Agency; Animals; Environmental Pollutants/toxicity; Evaluation Studies as Topic; Guidelines as Topic; United States
12.
Mutat Res; 521(1-2): 121-35, 2002 Nov 26.
Article in English | MEDLINE | ID: mdl-12438010

ABSTRACT

Recent advances in genetic toxicity (mutagenicity) testing methods and in approaches to performing risk assessment are prompting a renewed effort to harmonize genotoxicity risk assessment across the world. The US Environmental Protection Agency (EPA) first published Guidelines for Mutagenicity Risk Assessment in 1986 that focused mainly on transmissible germ cell genetic risk. Somatic cell genetic risk has also been a risk consideration, usually in support of carcinogenicity assessments. EPA and other international regulatory bodies have published mutagenicity testing requirements for agents (pesticides, pharmaceuticals, etc.) to generate data for use in genotoxicity risk assessments. The scheme that follows provides a proposed harmonization approach in which genotoxicity assessments are fully developed within the risk assessment paradigm used by EPA, and sets out a process that integrates newer thinking in testing battery design with the risk assessment process. A classification strategy for agents based on inherent genotoxicity, dose-responses observed in the data, and an exposure analysis is proposed. The classification leads to an initial level of concern for genotoxic risk to humans. A total risk characterization is performed using all relevant toxicity data and a comprehensive exposure evaluation in association with the genotoxicity data. The result of this characterization is ultimately used to generate a final level of concern for genotoxic risk to humans. The final level of concern and characterized genotoxicity risk assessment are communicated to decision makers for possible regulatory action(s) and to the public.


Subjects
Mutagenicity Tests/methods; Risk Assessment; Animals; Bacteria/drug effects; Bacteria/genetics; Dose-Response Relationship, Drug; Guidelines as Topic; Mice; Micronucleus Tests; United States; United States Environmental Protection Agency/standards