ABSTRACT
In toxicology and regulatory testing, the use of animal methods has been both a cornerstone and a subject of intense debate. To continue this discourse, a panel and audience of scientists from various sectors and countries convened at a workshop held during the 12th World Congress on Alternatives and Animal Use in the Life Sciences (WC-12). The ensuing discussion focused on the scientific and ethical considerations surrounding the necessity and responsibility of defending the creation of new animal data in regulatory testing. The primary aim was to foster an open dialogue between the panel members and the audience while encouraging diverse perspectives on the responsibilities and obligations of various stakeholders (including industry, regulatory bodies, technology developers, research scientists, and animal welfare NGOs) in defending the development and subsequent use of new animal data. This workshop summary report captures the key elements of this critical dialogue and collective introspection. It describes the intersection of scientific progress and ethical responsibility as all sectors seek to accelerate the pace of 21st century predictive toxicology and new approach methodologies (NAMs) for the protection of human health and the environment.
Subjects
Animal Welfare , Research Report , Animals , Humans , Industries , Risk Assessment , Animal Testing Alternatives/methods
ABSTRACT
Rapidly evolving technological methods and mechanistic toxicological understanding have paved the way for new science-based approaches for the determination of chemical safety in support of advancing public health. Approaches including read-across, high-throughput screening, in silico models, and organ-on-a-chip technologies were addressed in a 2017 workshop focusing on how scientists can move effectively toward a vision for 21st century food safety assessments. The application of these alternative methods, the need for further development of standardized practices, and the interpretation and communication of results were addressed. Expert presentations encompassed regulatory, industry, and academic perspectives, and the workshop culminated in a panel discussion in which participants engaged experts about current issues pertaining to the application of alternative methods in toxicological testing for food safety assessments.
Subjects
Animal Testing Alternatives , Food Safety , Toxicity Tests/methods , Risk Assessment
ABSTRACT
Read-across is a well-established data gap-filling technique applied for regulatory purposes. In the U.S. Environmental Protection Agency's New Chemicals Program under TSCA, read-across has been used extensively for decades; however, the extent of application and acceptance of read-across among U.S. federal agencies is less clear. In an effort to build read-across capacity, raise awareness of the state of the science, and work toward harmonization of read-across approaches across U.S. agencies, a new read-across workgroup was established under the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM). This is one of several ad hoc groups ICCVAM has convened to implement the ICCVAM Strategic Roadmap. In this article, we outline the charge and scope of the workgroup and summarize the current applications, tools used, and read-across needs of the agencies represented on the workgroup. Of the agencies surveyed, the Environmental Protection Agency had the greatest experience in using read-across, whereas other agencies indicated that they would benefit from an overview of the landscape of available tools and guidance. Two practical case studies are also described to illustrate how the read-across approaches applied by two agencies vary on account of decision context.
Subjects
Toxicity Tests , United States Government Agencies , Humans , United States , United States Environmental Protection Agency/organization & administration
ABSTRACT
Packaging is an indispensable component of the food manufacturing and food supply process. This scientific workshop was convened to bring together scientists from government, academia, and industry to discuss the state of the science regarding the safety of food packaging, prompted by rapidly advancing research to improve food packaging that continues to impact packaging technology, toxicology, exposure, risk assessment, and sustainability. The opening session focused on scientific challenges in the safety assessment of food packaging materials. Experts discussed migration of contaminant residues from food packaging, presented emerging analytical methods for safety evaluation, and highlighted the use of improved exposure assessment models and new packaging technologies. The workshop then focused on recycled packaging and sustainability. Experts also discussed application of recycled materials in food packaging, recycling processes, identification of contaminant residues from recycled packaging, and challenges in safety assessment of recycled materials. The workshop concluded with panel discussions that highlighted the challenges and research gaps in food packaging. Overall, there is a need to better understand and define "contaminants in food packaging" for developing appropriate testing methods needed to establish the significance of the migration levels of these contaminants and conduct appropriate safety assessments in this rapidly evolving field.
Subjects
Food Packaging , Food Safety , Food Contamination/analysis , Food Contamination/legislation & jurisprudence , Food Packaging/legislation & jurisprudence , Humans , Legislation, Food , Recycling , Risk Assessment
ABSTRACT
In the real world, individuals are exposed to chemicals from sources that vary over space and time. However, traditional risk assessments based on in vivo animal studies typically use a chemical-by-chemical approach and apical disease endpoints. New approach methodologies (NAMs) in toxicology, such as the in vitro high-throughput screening (HTS) assays generated in Tox21 and ToxCast, can provide mechanistic chemical hazard information more readily than in vivo methods for chemicals with no existing data. In this paper, we establish a workflow to assess the joint action of 41 modeled ambient chemical exposures in the air from the USA-wide National Air Toxics Assessment by integrating human exposures with hazard data from curated HTS (cHTS) assays to identify counties where exposure to the local chemical mixture may perturb a common biological target. We exemplify this proof-of-concept using CYP1A1 mRNA up-regulation. We first estimate internal exposure and then convert the inhaled concentration to a steady-state plasma concentration using physiologically based toxicokinetic modeling parameterized with county-specific information on ages and body weights. We then use the estimated blood plasma concentration and the concentration-response curve from the in vitro cHTS assay to determine the chemical-specific effects of the mixture components. Three mixture modeling methods were used to estimate the joint effect from exposure to the chemical mixture on the activity levels, which were geospatially mapped. Finally, a Monte Carlo uncertainty analysis was performed to quantify the influence of each parameter on the combined effects. This workflow demonstrates how NAMs can be used to predict early-stage biological perturbations that can lead to adverse health outcomes resulting from exposure to chemical mixtures. As a result, this work will advance mixture risk assessment grounded in early biological effects of chemical exposure.
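As a rough illustration of the joint-effect step, per-chemical responses can be combined under independent action or concentration addition. The sketch below uses hypothetical Hill parameters and plasma concentrations, not values from the study, and stands in for whichever mixture models the workflow actually applies:

```python
import numpy as np

def hill_response(conc, top, ac50, n=1.0):
    """Fraction of maximal assay response at a given plasma concentration (uM)."""
    return top * conc**n / (ac50**n + conc**n)

# Hypothetical two-chemical mixture: steady-state plasma concentrations (css, uM)
# and cHTS concentration-response (Hill) fits for a shared biological target
mixture = {
    "chem_A": {"css": 0.05, "top": 1.0, "ac50": 2.0},
    "chem_B": {"css": 0.20, "top": 0.8, "ac50": 5.0},
}

# Independent action: probability that at least one component perturbs the target
effects = [hill_response(c["css"], c["top"], c["ac50"]) for c in mixture.values()]
independent_action = 1.0 - np.prod([1.0 - e for e in effects])

# Concentration addition: sum of AC50-normalized concentrations (hazard-index-like)
conc_addition = sum(c["css"] / c["ac50"] for c in mixture.values())

print(independent_action, conc_addition)
```

Concentration addition treats components as dilutions of one another acting on the same target, which matches the common-biological-target framing of the workflow; independent action is the usual alternative when mechanisms differ.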
Subjects
Biological Assay , Environmental Exposure , Humans , Animals , Risk Assessment , Monte Carlo Method , Environmental Exposure/analysis
ABSTRACT
The U.S. Environmental Protection Agency's Endocrine Disruptor Screening Program (EDSP) is tasked with assessing chemicals for their potential to perturb endocrine pathways, including those controlled by the androgen receptor (AR). To address challenges associated with traditional testing strategies, EDSP is considering in vitro high-throughput screening assays to screen and prioritize chemicals more efficiently. The ability of these assays to accurately reflect chemical interactions in nonmammalian species remains uncertain. Therefore, a goal of the EDSP is to evaluate how broadly results can be extrapolated across taxa. To assess the cross-species conservation of AR-modulated pathways, computational analyses and systematic literature review approaches were used to conduct a comprehensive analysis of existing in silico, in vitro, and in vivo data. First, molecular target conservation was assessed across 585 diverse species based on the structural similarity of ARs. These results indicate that ARs are conserved across vertebrates and are predicted to share similar susceptibility to chemicals that interact with the human AR. Systematic analysis of over 5000 published manuscripts was used to compile in vitro and in vivo cross-species toxicity data. Assessment of in vitro data indicates conservation of responses across vertebrate ARs, with potential differences in sensitivity. Similarly, in vivo data indicate strong conservation of the AR signaling pathways across vertebrate species, although sensitivity may vary. Overall, this study demonstrates a framework for utilizing bioinformatics and existing data to build weight of evidence for cross-species extrapolation and provides a technical basis for extrapolating hAR-based data to prioritize hazard in nonmammalian vertebrate species.
Subjects
Endocrine Disruptors , Receptors, Androgen , Animals , United States , Humans , Receptors, Androgen/metabolism , United States Environmental Protection Agency , Endocrine System/chemistry , Endocrine System/metabolism , Endocrine Disruptors/toxicity , Endocrine Disruptors/chemistry , High-Throughput Screening Assays/methods
ABSTRACT
Regulatory agencies rely upon rodent in vivo acute oral toxicity data to determine hazard categorization, require appropriate precautionary labeling, and perform quantitative risk assessments. As the field of toxicology moves toward animal-free new approach methodologies (NAMs), there is a pressing need to develop a reliable, robust reference data set to characterize the reproducibility and inherent variability of the in vivo acute oral toxicity test method, which would serve to contextualize results and set expectations regarding NAM performance. Such a data set is also needed for training and evaluating computational models. To meet these needs, rat acute oral LD50 data from multiple databases were compiled, curated, and analyzed to characterize variability and reproducibility of results across a set of up to 2441 chemicals with multiple independent study records. Conditional probability analyses reveal that replicate studies result in the same hazard categorization only about 60% of the time, on average. Although we did not have sufficient study metadata to evaluate the impact of specific protocol components (e.g., strain, age, or sex of rat; feed used; treatment vehicle), studies were assumed to follow standard test guidelines. We investigated, but could not attribute, various chemical properties as the sources of variability (i.e., chemical structure, physicochemical properties, functional use). Thus, we conclude that inherent biological or protocol variability likely underlies the variance in the results. Based on the observed variability, we were able to quantify a margin of uncertainty of ±0.24 log10 (mg/kg) associated with discrete in vivo rat acute oral LD50 values.
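The conditional-probability idea can be illustrated with a small simulation: given an assumed log10 LD50 study-to-study variability (the sigma below is a hypothetical illustration, not the paper's estimate), how often do two replicate studies of the same chemical land in the same GHS acute oral category?

```python
import numpy as np

rng = np.random.default_rng(0)

# GHS acute oral hazard category boundaries, as log10(mg/kg): 5, 50, 300, 2000, 5000
BOUNDS = np.log10([5, 50, 300, 2000, 5000])

def p_same_category(true_log_ld50, sigma=0.3, n=100_000):
    """Probability that two replicate studies of a chemical fall in the same
    hazard category, assuming normally distributed log10 LD50 variability."""
    a = rng.normal(true_log_ld50, sigma, n)
    b = rng.normal(true_log_ld50, sigma, n)
    return float(np.mean(np.searchsorted(BOUNDS, a) == np.searchsorted(BOUNDS, b)))

# Agreement is lowest for chemicals whose LD50 sits near a category boundary
print(p_same_category(np.log10(1000)))  # mid-category chemical
print(p_same_category(np.log10(300)))   # chemical on a boundary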
Subjects
Reproducibility of Results , Animals , Databases, Factual , Probability , Rats , Risk Assessment/methods , Toxicity Tests, Acute/methods
ABSTRACT
Humans are exposed to large numbers of chemicals during their daily activities. To assess and understand potential health impacts of chemical exposure, investigators and regulators need access to reliable toxicity data. In particular, reliable toxicity data for a wide range of chemistries are needed to support development of new approach methodologies (NAMs) such as computational models, which offer increased throughput relative to traditional approaches and reduce or replace animal use. NAMs development and evaluation require chemically diverse data sets that are typically constructed by incorporating results from multiple studies into a single, integrated view; however, integrating data is not always a straightforward task. Primary study sources often vary in the way data are organized and reported. Metadata and information needed to support interoperability and provide context are often lacking, which necessitates literature research on the assay prior to attempting data integration. The Integrated Chemical Environment (ICE) was developed to support the development, evaluation, and application of NAMs. ICE provides curated toxicity data and computational tools to integrate and explore available information, thus facilitating knowledge discovery and interoperability. This paper describes the data curation workflow for integrating data into ICE. Data destined for ICE undergo rigorous harmonization, standardization, and formatting processes using both automated and manual expert-driven approaches. These processes improve the utility of the data for diverse analyses and facilitate application within ICE or a user's external workflow while preserving data integrity and context. ICE data curation provides the structure, reliability, and accessibility needed for data to support chemical assessments.
ABSTRACT
Acute toxicity in silico models are being used to support an increasing number of application areas, including (1) product research and development, (2) product approval and registration, and (3) the transport, storage, and handling of chemicals. The adoption of such models is hindered, in part, by a lack of guidance describing how to perform and document an in silico analysis. To address this issue, a framework for acute toxicity hazard assessment is proposed that combines results from different sources, including in silico methods and in vitro or in vivo experiments. In silico methods that can assist the prediction of in vivo outcomes (i.e., LD50) are analyzed, leading to the conclusion that predictions obtained using in silico approaches are now well suited to reliably support assessment of LD50-based acute toxicity for the purpose of GHS classification. A general overview is provided of the endpoints from in vitro studies commonly evaluated for predicting acute toxicity (e.g., cytotoxicity/cytolethality as well as assays targeting specific mechanisms). The increased understanding of pathways and key triggering mechanisms underlying toxicity, together with the increased availability of in vitro data, allows for a shift away from assessments based solely on endpoints such as LD50 toward mechanism-based endpoints that can be accurately assessed in vitro or by using in silico prediction models. This paper also highlights the importance of an expert review of all available information using weight-of-evidence considerations and illustrates, through a series of diverse practical use cases, how in silico approaches support the assessment of acute toxicity.
ABSTRACT
In vitro methods offer opportunities to provide mechanistic insight into bioactivity as well as human-relevant toxicological assessments compared to animal testing. One of the challenges for this task is putting in vitro bioactivity data in an in vivo exposure context, for which in vitro to in vivo extrapolation (IVIVE) translates in vitro bioactivity to clinically relevant exposure metrics using reverse dosimetry. This study applies an IVIVE approach to the toxicity assessment of ingredients and their mixtures in e-cigarette (EC) aerosols as a case study. Reported in vitro cytotoxicity data of EC aerosols, as well as in vitro high-throughput screening (HTS) data for individual ingredients in EC liquids (e-liquids) are used. Open-source physiologically based pharmacokinetic (PBPK) models are used to calculate the plasma concentrations of individual ingredients, followed by reverse dosimetry to estimate the human equivalent administered doses (EADs) needed to obtain these plasma concentrations for the total e-liquids. Three approaches (single actor approach, additive effect approach, and outcome-oriented ingredient integration approach) are used to predict EADs of e-liquids considering differential contributions to the bioactivity from the ingredients (humectant carriers [propylene glycol and glycerol], flavors, benzoic acid, and nicotine). The results identified critical factors for the EAD estimation, including the ingredients of the mixture considered to be bioactive, in vitro assay selection, and the data integration approach for mixtures. Further, we introduced the outcome-oriented ingredient integration approach to consider e-liquid ingredients that may lead to a common toxicity outcome (e.g., cytotoxicity), facilitating a quantitative evaluation of in vitro toxicity data in support of human risk assessment.
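The reverse-dosimetry step reduces to a simple scaling once a PBPK model supplies the steady-state plasma concentration for a unit administered dose. A minimal sketch with hypothetical numbers, not values from the study:

```python
def equivalent_administered_dose(in_vitro_conc_uM, css_per_unit_dose_uM):
    """Reverse dosimetry: the administered dose (mg/kg/day) whose steady-state
    plasma concentration matches the in vitro bioactive concentration.
    css_per_unit_dose_uM is the Css a PBPK/toxicokinetic model predicts
    for a 1 mg/kg/day dose; linear kinetics are assumed."""
    return in_vitro_conc_uM / css_per_unit_dose_uM

# Hypothetical example: an assay AC50 of 3 uM and a modeled Css of 1.5 uM
# per 1 mg/kg/day yields an EAD of 2 mg/kg/day
ead = equivalent_administered_dose(3.0, 1.5)  # -> 2.0 mg/kg/day
```

The linearity assumption is what lets a single unit-dose PBPK run be rescaled to any bioactive concentration; for mixtures, this scaling is applied per ingredient before the integration approaches described above are compared.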
ABSTRACT
BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: lethal dose 50 (LD50) values, U.S. Environmental Protection Agency hazard categories (four), Globally Harmonized System of Classification and Labelling hazard categories (five), very toxic chemicals (LD50≤50 mg/kg), and nontoxic chemicals (LD50>2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies.
CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.
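The consensus step can be sketched as an average over models whose applicability domain covers the query chemical. This is a simplified illustration with hypothetical predictions, not the actual CATMoS weighting scheme:

```python
import numpy as np

def consensus_ld50(predictions):
    """Average log10(LD50) over models whose applicability domain covers
    the chemical. predictions: list of (log10_ld50, in_domain) tuples."""
    in_domain = [p for p, ok in predictions if ok]
    if not in_domain:
        return None  # no submitted model can cover this chemical
    return float(np.mean(in_domain))

# Hypothetical predictions from three models; the third is out of its
# applicability domain and therefore excluded from the consensus
preds = [(2.1, True), (2.5, True), (4.0, False)]
print(consensus_ld50(preds))
```

Restricting the average to in-domain models is what lets a consensus outperform any single model: each model contributes only where its training chemistry supports a prediction.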
Subjects
Government Agencies , Animals , Computer Simulation , Rats , Toxicity Tests, Acute , United States , United States Environmental Protection Agency
ABSTRACT
Multiple US agencies use acute oral toxicity data in a variety of regulatory contexts. One of the ad hoc groups that the US Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) established to implement the ICCVAM Strategic Roadmap was the Acute Toxicity Workgroup (ATWG), formed to support the development, acceptance, and actualisation of new approach methodologies (NAMs). One of the ATWG charges was to evaluate in vitro and in silico methods for predicting rat acute systemic toxicity. Collaboratively, the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) and the US Environmental Protection Agency (US EPA) collected a large body of rat oral acute toxicity data (~16,713 studies for 11,992 substances) to serve as a reference set for evaluating the performance and coverage of new and existing models, as well as for building understanding of the inherent variability of the animal data. Here, we focus on evaluating in silico models for predicting the lethal dose 50 (LD50) as implemented within two expert systems, TIMES and TEST. Performance and coverage were evaluated against the reference dataset. The performance of both models was similar, but TEST was able to make predictions for more chemicals than TIMES. The subset of the data with multiple (>3) LD50 values was used to evaluate the variability in the data and served as a benchmark for comparing model performance. Enrichment analysis was conducted using ToxPrint chemical fingerprints to identify the types of chemicals for which predictions lay outside the upper 95% confidence interval. Overall, TEST and TIMES models performed similarly but had different chemical features associated with low-accuracy predictions, reaffirming that these models are complementary and both worth evaluation when seeking to predict rat LD50 values.
ABSTRACT
Moving toward species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency's httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.
Subjects
Chemical Safety , Risk Assessment , Animal Testing Alternatives , Animals , Databases, Factual , High-Throughput Screening Assays , Humans , Toxicity Tests
ABSTRACT
BACKGROUND: Low-cost, high-throughput in vitro bioassays have potential as alternatives to animal models for toxicity testing. However, incorporating in vitro bioassays into chemical toxicity evaluations such as read-across requires significant data curation and analysis based on knowledge of relevant toxicity mechanisms, lowering the enthusiasm for using the massive amount of unstructured public data. OBJECTIVE: We aimed to develop a computational method to automatically extract useful bioassay data from a public repository (i.e., PubChem) and assess its ability to predict animal toxicity using a novel bioprofile-based read-across approach. METHODS: A training database containing 7,385 compounds with diverse rat acute oral toxicity data was searched against PubChem to establish in vitro bioprofiles. Using a novel subspace clustering algorithm, bioassay groups that may inform on relevant toxicity mechanisms underlying acute oral toxicity were identified. These bioassay groups were used to predict animal acute oral toxicity using read-across through a cross-validation process. Finally, an external test set of over 600 new compounds was used to validate the resulting model predictivity. RESULTS: Several bioassay clusters showed high predictivity for acute oral toxicity (positive prediction rates ranging from 62% to 100%) through cross-validation. After incorporating individual clusters into an ensemble model, chemical toxicants in the external test set were evaluated for putative acute toxicity (positive prediction rate of 76%). Additionally, chemical fragment-in vitro-in vivo relationships were identified to illustrate new animal toxicity mechanisms. CONCLUSIONS: The in vitro bioassay data-driven profiling strategy developed in this study meets the urgent needs of computational toxicology in the current big data era and can be extended to develop predictive models for other complex toxicity end points. https://doi.org/10.1289/EHP3614.
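The read-across step over bioprofiles can be sketched as a similarity-weighted vote among the most similar source chemicals. The binary hit-call profiles and labels below are hypothetical, and Jaccard similarity stands in for whatever metric the study's subspace clustering actually uses:

```python
import numpy as np

def jaccard(a, b):
    """Similarity between two binary bioassay hit-call profiles."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def read_across(query_profile, neighbors, k=2):
    """Predict toxicity (1 = toxic, 0 = nontoxic) as the similarity-weighted
    vote of the k most bioprofile-similar source chemicals.
    neighbors: list of (profile, label) pairs."""
    scored = sorted(((jaccard(query_profile, p), y) for p, y in neighbors),
                    reverse=True)[:k]
    total = sum(s for s, _ in scored)
    return sum(s * y for s, y in scored) / total if total else None

# Hypothetical binary hit-call profiles over five PubChem assays
neighbors = [([1, 1, 0, 0, 1], 1), ([1, 0, 0, 0, 1], 1), ([0, 0, 1, 1, 0], 0)]
print(read_across([1, 1, 0, 0, 1], neighbors))  # -> 1.0 (predicted toxic)
```

Restricting the profile to an informative assay cluster, as the study's subspace clustering does, is what keeps such a vote from being diluted by the many assays irrelevant to acute oral toxicity.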
Subjects
Animal Testing Alternatives/statistics & numerical data , Computational Biology/methods , High-Throughput Screening Assays/methods , Toxicity Tests, Acute/methods , Animals , Computational Biology/instrumentation , Hazardous Substances , Humans , Rats , Toxicity Tests, Acute/instrumentation
ABSTRACT
Programs including the ToxCast project have generated large amounts of in vitro high-throughput screening (HTS) data, and best approaches for the interpretation and use of HTS data, including for chemical safety assessment, remain to be evaluated. To fill this gap, we conducted case studies of two indirect food additive chemicals where ToxCast data were compared with in vivo toxicity data using the RISK21 approach. Two food contact substances, sodium (2-pyridylthio)-N-oxide and dibutyltin dichloride, were selected, and available exposure data, toxicity data, and model predictions were compiled and assessed. Oral equivalent doses for the ToxCast bioactivity data were determined by in vitro to in vivo extrapolation (IVIVE). For sodium (2-pyridylthio)-N-oxide, bioactive concentrations in ToxCast assays corresponded to low- and no-observed adverse effect levels in animal studies. For dibutyltin dichloride, the ToxCast bioactive concentrations were below the dose range that demonstrated toxicity in animals; however, this was confounded by the lack of toxicokinetic data, necessitating the use of conservative toxicokinetic parameter estimates for IVIVE calculations. This study highlights the potential utility of the RISK21 approach for interpretation of the ToxCast HTS data, as well as the challenges involved in integrating in vitro HTS data into safety assessments.
Subjects
Dietary Exposure , Food Additives/toxicity , Risk Assessment/methods , Toxicity Tests/methods , Animals , Food Additives/pharmacokinetics , Humans , United States , United States Environmental Protection Agency
ABSTRACT
The median lethal dose for rodent oral acute toxicity (LD50) is a standard piece of information required to categorize chemicals in terms of the potential hazard posed to human health after acute exposure. The exclusive use of in vivo testing is limited by the time and costs required to perform experiments and by the need to sacrifice a number of animals. (Quantitative) structure-activity relationships [(Q)SARs] have proved to be a valid alternative for reducing and supporting in vivo assays for assessing acute toxicological hazard. In the framework of a new international collaborative project, the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods and the U.S. Environmental Protection Agency's National Center for Computational Toxicology compiled a large database of rat acute oral LD50 data, with the aim of supporting the development of new computational models for predicting five regulatory relevant acute toxicity endpoints. In this article, a series of regression and classification computational models were developed by employing different statistical and knowledge-based methodologies. External validation was performed to demonstrate the real-life predictability of the models. Integrated modeling was then applied to improve the performance of single models. Statistical results confirmed the relevance of the developed models in regulatory frameworks and the effectiveness of integrated modeling. The best integrated strategies reached RMSEs lower than 0.50, and the best classification models reached balanced accuracies over 0.70 for multi-class and over 0.80 for binary endpoints. Computed predictions will be hosted on the EPA's Chemistry Dashboard and made freely available to the scientific community.
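The two performance metrics reported above can be computed directly; the toy values below are illustrative only:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, here on log10(LD50) predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; unlike plain accuracy, it is not inflated
    when one hazard category dominates the data set."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# Toy example: four chemicals with two hazard categories
print(rmse([2.0, 3.0], [2.5, 3.5]))                   # -> 0.5
print(balanced_accuracy([1, 1, 1, 2], [1, 1, 2, 2]))  # (2/3 + 1)/2
```

Balanced accuracy is the natural choice here because acute toxicity categories are heavily imbalanced: most chemicals fall into the less toxic classes, so plain accuracy would reward models that rarely predict the rare classes.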
ABSTRACT
An emerging emphasis on mechanism-focused and human-relevant alternatives to animal use in toxicology underlies the Toxicity Testing in the 21st Century initiative. Herein we describe in vitro high-throughput screening programs seeking to address this goal, as well as strategies established to integrate assay results to build weight of evidence in support of hazard assessment. Furthermore, we discuss unique challenges facing the application of such alternatives for assessing immunotoxicity, given the complexity of immune responses. Addressing these challenges will require the development of novel in vitro assays that evaluate well-characterized biochemical processes involved in the immune response to help inform on putative adverse outcomes in vivo.
Subjects
Immune System/metabolism , Toxicity Tests/history , Toxicity Tests/methods , Animals , High-Throughput Screening Assays , History, 21st Century , Humans , Immunization
ABSTRACT
The U.S. Environmental Protection Agency Endocrine Disruptor Screening Program and the Organisation for Economic Co-operation and Development (OECD) have used the human adrenocortical carcinoma (H295R) cell-based assay to predict chemical perturbation of androgen and estrogen production. Recently, a high-throughput H295R (HT-H295R) assay was developed as part of the ToxCast program that includes measurement of 11 hormones, including progestagens, corticosteroids, androgens, and estrogens. To date, 2012 chemicals have been screened at 1 concentration; of these, 656 chemicals have been screened in concentration-response. The objectives of this work were to: (1) develop an integrated analysis of chemical-mediated effects on steroidogenesis in the HT-H295R assay and (2) evaluate whether the HT-H295R assay predicts estrogen and androgen production specifically via comparison with the OECD-validated H295R assay. To support application of HT-H295R assay data to weight-of-evidence and prioritization tasks, a single numeric value based on Mahalanobis distances was computed for 654 chemicals to indicate the magnitude of effects on the synthesis of 11 hormones. The maximum mean Mahalanobis distance (maxmMd) values were high for strong modulators (prochloraz, mifepristone) and lower for moderate modulators (atrazine, molinate). Twenty-five of 28 reference chemicals used for OECD validation were screened in the HT-H295R assay and produced qualitatively similar results, with accuracies of 0.90/0.75 and 0.81/0.91 for increased/decreased testosterone and estradiol production, respectively. The HT-H295R assay provides robust information regarding estrogen and androgen production, as well as additional hormones. The maxmMd from this integrated analysis may provide a data-driven approach to prioritizing lists of chemicals for putative effects on steroidogenesis.
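The maxmMd statistic rests on the Mahalanobis distance, which scales each hormone's deviation from control by the control variability and accounts for correlations between hormones. A minimal sketch with a hypothetical three-hormone example (the assay itself measures 11; all numbers below are made up):

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """Distance of a hormone-response vector x from the control distribution
    described by its mean vector and covariance matrix."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical control distribution of log2 fold-changes for 3 hormones
mean = np.zeros(3)
cov = np.diag([0.04, 0.09, 0.01])  # control variances (assumed uncorrelated here)

# A treated sample's log2 fold-changes across the 3 hormones
x = np.array([0.4, -0.6, 0.1])
print(mahalanobis_distance(x, mean, cov))
```

Because the distance is variance-scaled, a small shift in a tightly controlled hormone can contribute as much as a large shift in a noisy one, which is what makes a single distance value a usable summary across 11 heterogeneous hormone measurements.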
Subjects
Endocrine Disruptors/toxicity , Estrogens/biosynthesis , High-Throughput Screening Assays , Testosterone/biosynthesis , Cell Line, Tumor , Data Interpretation, Statistical , Dose-Response Relationship, Drug , Endocrine Disruptors/administration & dosage , Endocrine Disruptors/classification , High-Throughput Screening Assays/methods , High-Throughput Screening Assays/statistics & numerical data , Humans , Organisation for Economic Co-operation and Development , Predictive Value of Tests , Reproducibility of Results , United States , United States Environmental Protection Agency
ABSTRACT
In early 2018, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) published the "Strategic Roadmap for Establishing New Approaches to Evaluate the Safety of Chemicals and Medical Products in the United States" (ICCVAM 2018). Cross-agency federal workgroups have been established to implement this roadmap for various toxicological testing endpoints, with an initial focus on acute toxicity testing. The ICCVAM acute toxicity workgroup (ATWG) helped organize a global collaboration to build predictive in silico models for acute oral systemic toxicity, based on a large dataset of rodent studies and targeted toward regulatory needs identified across federal agencies. Thirty-two international groups across government, industry, and academia participated in the project, culminating in a workshop held in April 2018 at the National Institutes of Health (NIH). At the workshop, computational modelers and regulatory decision makers met to discuss the feasibility of using predictive model outputs for regulatory purposes in lieu of acute oral systemic toxicity testing. The models were combined to yield consensus predictions that demonstrated excellent performance relative to the animal data; workshop outcomes and follow-up activities to make these tools available and put them into practice are discussed here.
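A simple way to combine independent model outputs into a consensus prediction, as described above, is to average probability-like scores and apply a decision threshold. This is a hedged sketch of the general technique, not the specific consensus scheme used in the collaboration; the function name and threshold are illustrative assumptions.

```python
from statistics import mean

def consensus_prediction(model_scores, threshold=0.5):
    """Hypothetical consensus scheme: average the predicted probabilities
    from several independent in silico models for one chemical and apply
    a decision threshold.

    model_scores: list of per-model predicted probabilities of toxicity
    Returns (consensus_score, predicted_toxic).
    """
    score = mean(model_scores)  # unweighted average across models
    return score, score >= threshold
```

Averaging tends to cancel uncorrelated errors from individual models, which is one reason consensus predictions often outperform any single model against the reference animal data.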
ABSTRACT
Changes in gene expression can help reveal the mechanisms of disease processes and the mode of action for toxicities and adverse cellular responses induced by exposures to chemicals, drugs, and environmental agents. The U.S. Tox21 Federal collaboration, which currently quantifies the biological effects of nearly 10,000 chemicals via quantitative high-throughput screening (qHTS) in in vitro model systems, is now making an effort to incorporate gene expression profiling into the existing battery of assays. Whole-transcriptome analyses performed on large numbers of samples using microarrays or RNA-Seq are currently cost-prohibitive. Accordingly, the Tox21 Program is pursuing a high-throughput transcriptomics (HTT) method that focuses on the targeted detection of gene expression for a carefully selected subset of the transcriptome, which can potentially reduce the cost roughly 10-fold and allow for the analysis of larger numbers of samples. To identify the optimal transcriptome subset, genes were sought that are (1) representative of the highly diverse biological space, (2) capable of serving as a proxy for expression changes in unmeasured genes, and (3) sufficient to provide coverage of well-described biological pathways. A hybrid method for gene selection is presented herein that combines data-driven and knowledge-driven concepts into one cohesive method. Our approach is modular, applicable to any species, and facilitates a robust, quantitative evaluation of performance. In particular, we were able to perform gene selection such that the resulting set of "sentinel genes" adequately represents all known canonical pathways from the Molecular Signatures Database (MSigDB v4.0) and can be used to infer expression changes for the remainder of the transcriptome.
The resulting computational model allowed us to choose a purely data-driven subset of 1500 sentinel genes, referred to as the S1500 set, which was then augmented using a knowledge-driven selection of additional genes to create the final S1500+ gene set. Our results indicate that the sentinel genes selected can be used to accurately predict pathway perturbations and biological relationships for samples under study.
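The idea of inferring unmeasured genes from a sentinel subset can be sketched with a per-gene linear model trained on reference transcriptome data. The S1500+ work uses a more elaborate computational model; this least-squares version, along with the function names and data layout, is an illustrative assumption only.

```python
import numpy as np

def fit_inference_model(sentinel_train, other_train):
    """Fit one linear model per non-sentinel gene that predicts its
    expression from the sentinel-gene profile (plain least squares;
    a sketch of the inference step, not the published method).

    sentinel_train: (n_samples, n_sentinel) training expression matrix
    other_train:    (n_samples, n_other) expression of unmeasured genes
    Returns (W, b) so that sentinel_profile @ W + b approximates other genes.
    """
    # append a column of ones to fit an intercept jointly with the weights
    X = np.hstack([sentinel_train, np.ones((sentinel_train.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, other_train, rcond=None)
    return coef[:-1], coef[-1]

def infer_expression(sentinel_profile, W, b):
    """Predict non-sentinel gene expression from a measured sentinel profile."""
    return sentinel_profile @ W + b
```

In practice the training set would be a large compendium of whole-transcriptome profiles, so that the sentinel-to-transcriptome mapping generalizes to new chemical treatments.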