ABSTRACT
Virtual control groups (VCGs) in nonclinical toxicity represent the concept of using appropriate historical control data to replace concurrent control group animals. Historical control data collected from standardized studies can serve as a basis for constructing VCGs, and legacy study reports can be used as a benchmark to evaluate VCG performance. Replacing the concurrent controls of legacy studies with VCGs should ideally reproduce the results of these studies. Based on three four-week rat oral toxicity legacy studies with varying degrees of toxicity findings, we developed a concept to evaluate VCG performance on different levels: the ability of VCGs to (i) reproduce statistically significant deviations from the concurrent control, (ii) reproduce test substance-related effects, and (iii) reproduce the conclusion of the toxicity study in terms of threshold dose, target organs, toxicological biomarkers (clinical pathology) and reversibility. Although the VCGs showed only a low to moderate ability to reproduce statistical results, the general study conclusions remained unchanged. Our results provide a first indication that carefully selected historical control data can be used to replace concurrent controls without impairing the general study conclusion. Additionally, the developed procedures and workflows lay the foundation for the future validation of virtual controls for use in regulatory toxicology.
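The first evaluation level above (reproducing statistically significant deviations from the concurrent control) can be sketched in code. This is a hedged illustration only: the Welch t statistic with a fixed critical value of 2.0 is a deliberate simplification (such studies typically use Dunnett's or similar multiple-comparison tests), and all values below are invented.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (sample variance, n-1)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((variance(a) / na + variance(b) / nb) ** 0.5)

def significant(treated, control, t_crit=2.0):
    """Crude significance flag; a real analysis would use Dunnett's test."""
    return abs(welch_t(treated, control)) > t_crit

# Hypothetical ALT values (U/L) for a high-dose group and two control sets
high_dose  = [62, 70, 66, 75, 68]
concurrent = [41, 45, 39, 44, 42]   # concurrent control of the legacy study
virtual    = [40, 46, 43, 42, 44]   # VCG sampled from historical control data

# Level (i): does the VCG reproduce the concurrent-control verdict?
reproduced = significant(high_dose, concurrent) == significant(high_dose, virtual)
print(reproduced)  # True here: both comparisons flag the same deviation
```

In this toy case the virtual control reproduces the statistical verdict; the abstract's point is that this agreement cannot be taken for granted across endpoints.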
Subject(s)
Control Groups, Toxicity Tests, Animals, Rats
ABSTRACT
The replacement of a proportion of concurrent controls by virtual controls in nonclinical safety studies has gained traction over the last few years. This is supported by foundational work, encouraged by regulators, and aligned with societal expectations regarding the use of animals in research. This paper provides an overview of the points to consider for any institution on the verge of implementing this concept, with emphasis on database creation, risks, and discipline-specific perspectives.
Subject(s)
Toxicity Tests, Toxicology, Animals, Toxicology/methods, Toxicity Tests/methods, Humans, Factual Databases, Risk Assessment
ABSTRACT
The availability of large amounts of high-quality control data from tightly controlled, regulated animal safety studies has prompted the idea of re-using these data beyond their classical applications (quality control, identification of treatment-related effects, and assessment of effect-size relevance) to build virtual control groups (VCGs). While the ethical and cost-saving aspects of such a concept are immediately evident, the potential challenges need to be carefully considered to avoid anything that could lower the sensitivity of an animal study to detect adverse events, safety thresholds, target organs, or biomarkers. In this brief communication, we summarize the current discussion regarding VCGs and propose a path forward for systematically assessing the replacement of concurrent controls with VCGs derived from historical data, so that conclusions can be drawn about the scientific value of the concept.
Subject(s)
Laboratory Animals, Animals, Control Groups, Quality Control
ABSTRACT
The European Partnership for Alternative Approaches to Animal Testing (EPAA) convened a 'Blue Sky Workshop' on new ideas for non-animal approaches to predict repeated-dose systemic toxicity. The aim of the Workshop was to formulate strategic ideas to improve and increase the applicability, implementation and acceptance of modern non-animal methods to determine systemic toxicity. The Workshop concluded that good progress is being made in assessing repeated-dose toxicity without animals by taking advantage of existing knowledge in toxicology, thresholds of toxicological concern, adverse outcome pathways and read-across workflows. These approaches can be supported by New Approach Methodologies (NAMs) utilising modern molecular technologies and computational methods. Recommendations from the Workshop centred on the needs of better chemical safety assessment: strengthening the evidence base for decision making; developing, standardising and harmonising NAMs for human toxicity; and improving the applicability and acceptance of novel techniques. "Disruptive thinking" is required to reconsider chemical legislation, the validation of NAMs and the opportunities to move away from reliance on animal tests. Case study practices and data sharing, ensuring the reproducibility of NAMs, were viewed as crucial to the improvement of non-animal test approaches for systemic toxicity.
Subject(s)
Animal Testing Alternatives, Toxicity Tests, Adverse Outcome Pathways, Animals, Chemical Safety, Dose-Response Relationship (Drug), Humans
ABSTRACT
Drug-induced liver injury (DILI) cannot be accurately predicted by animal models. In addition, currently available in vitro methods do not allow for the estimation of hepatotoxic doses or the determination of an acceptable daily intake (ADI). To overcome this limitation, an in vitro/in silico method was established that predicts the risk of human DILI in relation to oral doses and blood concentrations. This method can be used to estimate DILI risk if the maximal blood concentration (Cmax) of the test compound is known. Moreover, an ADI can be estimated even for compounds without information on blood concentrations. To systematically optimize the in vitro system, two novel test performance metrics were introduced: the toxicity separation index (TSI), which quantifies how well a test differentiates between hepatotoxic and non-hepatotoxic compounds, and the toxicity estimation index (TEI), which measures how well hepatotoxic blood concentrations in vivo can be estimated. In vitro test performance was optimized for a training set of 28 compounds, based on TSI and TEI, demonstrating that (1) concentrations where cytotoxicity first becomes evident in vitro (EC10) yielded better metrics than higher toxicity thresholds (EC50); (2) compound incubation for 48 h was better than 24 h, with no further improvement of the TSI after 7 days of incubation; (3) metrics were moderately improved by adding gene expression to the test battery; (4) evaluation of pharmacokinetic parameters demonstrated that total blood compound concentrations and the 95%-population-based percentile of Cmax were best suited to estimate human toxicity. With a support vector machine-based classifier, using EC10 and Cmax as variables, the cross-validated sensitivity, specificity and accuracy for hepatotoxicity prediction were 100%, 88% and 93%, respectively.
Concentrations in the culture medium allowed extrapolation to blood concentrations in vivo that are associated with a specific probability of hepatotoxicity and the corresponding oral doses were obtained by reverse modeling. Application of this in vitro/in silico method to the rat hepatotoxicant pulegone resulted in an ADI that was similar to values previously established based on animal experiments. In conclusion, the proposed method links oral doses and blood concentrations of test compounds to the probability of hepatotoxicity.
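The TSI is described above only as quantifying how well a test separates hepatotoxic from non-hepatotoxic compounds; the exact formula is not given in the abstract. As a hedged illustration of the idea, a rank-based (ROC-AUC-style) separation score can be computed over EC10 values, where hepatotoxic compounds are expected to show cytotoxicity at lower concentrations. The scoring function and all EC10 values below are invented, not the authors' actual TSI definition.

```python
def separation_score(tox_ec10, nontox_ec10):
    """Fraction of (toxic, non-toxic) pairs ranked correctly, i.e. the
    probability that a hepatotoxic compound shows a LOWER EC10 than a
    non-hepatotoxic one (ties count half). 1.0 = perfect separation."""
    pairs = [(t, n) for t in tox_ec10 for n in nontox_ec10]
    score = sum(1.0 if t < n else 0.5 if t == n else 0.0 for t, n in pairs)
    return score / len(pairs)

# Hypothetical EC10 values (µM) from a cytotoxicity assay
hepatotoxic     = [3.2, 8.5, 1.1, 20.0]
non_hepatotoxic = [150.0, 95.0, 20.0, 300.0]

print(round(separation_score(hepatotoxic, non_hepatotoxic), 3))  # prints 0.969
```

A score near 1.0 corresponds to an assay condition that cleanly separates the two compound classes, which is the property the TSI was introduced to optimize.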
Subject(s)
Chemical and Drug Induced Liver Injury/diagnosis, Drug-Related Side Effects and Adverse Reactions/diagnosis, Oral Administration, Algorithms, Animals, Cell Line, Cell Survival/drug effects, Computer Simulation, Gene Expression/drug effects, Hepatocytes/drug effects, Humans, In Vitro Techniques, Maximum Tolerated Dose, Pharmaceutical Preparations/administration & dosage, Pharmaceutical Preparations/blood, Pharmacokinetics, Reproducibility of Results, Sensitivity and Specificity, Support Vector Machine
ABSTRACT
Although lack of efficacy is an important cause of late-stage attrition in drug development, the shortcomings in the translation of toxicities observed during preclinical development to observations in clinical trials or post-approval remain an ongoing topic of research. The concordance between preclinical and clinical safety observations has been analyzed only on relatively small data sets, mostly over short time periods of drug approvals. We therefore explored the feasibility of a big-data analysis on a set of 3,290 approved drugs and formulations for which 1,637,449 adverse events were reported for both human and animal species in regulatory submissions over a period of more than 70 years. The events reported in five species - rat, dog, mouse, rabbit, and cynomolgus monkey - were treated as diagnostic tests for human events, and the diagnostic power was computed for each event/species pair using likelihood ratios. The animal-human translation of many key observations is confirmed as being predictive, such as QT prolongation and arrhythmias in dog. Our study confirmed the general predictivity of animal safety observations for humans, but also identified issues with such automated analyses, related on the one hand to data curation and controlled vocabularies, and on the other to methodological changes over time.
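Treating an animal event as a diagnostic test for the corresponding human event, as the study above does, reduces to computing likelihood ratios from a 2x2 contingency table. A minimal sketch follows; the counts are invented for illustration and are not the study's data.

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive/negative likelihood ratios of an animal event as a
    'diagnostic test' for the corresponding human adverse event.
    LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1 - spec), (1 - sens) / spec

# Hypothetical counts for one event/species pair (e.g. QT prolongation in dog):
# tp: seen in dog and human, fp: dog only, fn: human only, tn: neither
lr_pos, lr_neg = likelihood_ratios(tp=30, fp=10, fn=20, tn=140)
print(round(lr_pos, 2), round(lr_neg, 2))  # prints 9.0 0.43
```

An LR+ well above 1 (as in this toy example) indicates that observing the event in the animal species raises the odds of observing it in humans, which is the sense in which the study calls an event/species pair predictive.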
Subject(s)
Big Data, Drug-Related Side Effects and Adverse Reactions, Pharmaceutical Preparations/analysis, Animals, Dogs, Humans, Macaca fascicularis, Mice, Rabbits, Rats
ABSTRACT
A previously published fragmentation method for making reliable negative in silico predictions has been applied to the problem of predicting skin sensitisation in humans, making use of a dataset of over 2750 chemicals with publicly available skin sensitisation data from 18 in vivo assays. An assay hierarchy was designed to enable the classification of chemicals within this dataset as either sensitisers or non-sensitisers where data from more than one in vivo test was available. The negative prediction approach was validated internally, using a 5-fold cross-validation, and externally, against a proprietary dataset of approximately 1000 chemicals with in vivo reference data shared by members of the pharmaceutical, nutritional, and personal care industries. The negative predictivity for this proprietary dataset was high in all cases (>75%), and the model was also able to identify structural features that resulted in a lower accuracy or a higher uncertainty in the negative prediction, termed misclassified and unclassified features, respectively. These features could serve as an aid for further expert assessment of the negative in silico prediction.
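The negative predictivity reported above (>75%) is simply the fraction of predicted non-sensitisers that are confirmed negative in vivo. A one-line sketch with invented counts:

```python
def negative_predictivity(true_neg, false_neg):
    """Fraction of 'non-sensitiser' predictions confirmed by in vivo data:
    NPV = TN / (TN + FN)."""
    return true_neg / (true_neg + false_neg)

# Hypothetical outcome for 400 chemicals predicted negative in silico:
# 330 are non-sensitisers in vivo (TN), 70 are sensitisers that were missed (FN)
print(negative_predictivity(true_neg=330, false_neg=70))  # prints 0.825
```

This 82.5% would clear the >75% level reported for the proprietary dataset; the abstract's misclassified/unclassified features flag exactly the chemicals where such a figure should be trusted less.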
Subject(s)
Allergic Contact Dermatitis, Haptens, Risk Assessment/methods, Animals, Computer Simulation, Factual Databases, Guinea Pigs, Humans, Mice
ABSTRACT
Dermal contact with chemicals may lead to an inflammatory reaction known as allergic contact dermatitis. Consequently, it is important to assess new and existing chemicals for their skin sensitizing potential and to mitigate exposure accordingly. There is an urgent need to develop quantitative non-animal methods to better predict the potency of potential sensitizers, driven largely by European Union (EU) Regulation 1223/2009, which forbids the use of animal tests for cosmetic ingredients sold in the EU. A Nearest Neighbours in silico model was developed using an in-house dataset of 1096 murine local lymph node assay (LLNA) studies. The EC3 value (the effective concentration of the test substance producing a threefold increase in the stimulation index compared to controls) of a given chemical was predicted using the weighted average of EC3 values of up to 10 most similar compounds within the same mechanistic space (as defined by activating the same Derek skin sensitization alert). The model was validated using previously unseen internal (n = 45) and external (n = 103) data, and the accuracy of predictions was assessed using threefold error, fivefold error, European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) and Globally Harmonized System of Classification and Labelling of Chemicals (GHS) classifications. In particular, the model predicts the GHS skin sensitization category of compounds well, predicting 64% of chemicals in an external test set within the correct category. Of the remaining chemicals in the previously unseen dataset, 25% were over-predicted (GHS 1A predicted: GHS 1B experimentally) and 11% were under-predicted (GHS 1B predicted: GHS 1A experimentally).
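The weighted-average EC3 prediction from up to 10 mechanistic nearest neighbours can be sketched as below. The similarity weighting is an assumption (the abstract does not specify the weighting function), and the neighbour data are invented; the GHS sub-categorisation step, however, follows the standard LLNA criterion (EC3 ≤ 2% → 1A, otherwise 1B).

```python
def predict_ec3(neighbours, k=10):
    """Predict EC3 as the similarity-weighted average over up to k most
    similar compounds sharing the same mechanistic (alert) space.
    `neighbours` is a list of (similarity, ec3) pairs, similarity in (0, 1]."""
    top = sorted(neighbours, reverse=True)[:k]          # keep k most similar
    total_w = sum(sim for sim, _ in top)
    return sum(sim * ec3 for sim, ec3 in top) / total_w

def ghs_category(ec3):
    """GHS sub-categorisation from LLNA EC3: <=2% -> 1A (strong), >2% -> 1B."""
    return "1A" if ec3 <= 2.0 else "1B"

# Hypothetical neighbours of a query chemical: (Tanimoto similarity, EC3 %)
neighbours = [(0.90, 2.0), (0.80, 5.0), (0.60, 10.0)]
ec3_pred = predict_ec3(neighbours)
print(round(ec3_pred, 2), ghs_category(ec3_pred))  # prints 5.13 1B
```

Restricting neighbours to the same Derek alert, as the model does, keeps the averaging within one mechanistic domain rather than across unrelated chemistries.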
Subject(s)
Allergic Contact Dermatitis/etiology, Drug-Related Side Effects and Adverse Reactions/etiology, Biological Models, Pharmaceutical Preparations/chemistry, Animal Use Alternatives, Animals, Computer Simulation, Datasets as Topic, Local Lymph Node Assay, Mice, Predictive Value of Tests, Reproducibility of Results, Structure-Activity Relationship
ABSTRACT
Nonclinical safety pharmacology and toxicology testing of drug candidates assess the potential adverse effects caused by the drug in relation to its intended use in humans. Hazards related to a drug have to be identified and the potential risks at the intended exposure have to be evaluated in comparison to the potential benefit of the drug. Preclinical safety is thus an integral part of drug discovery and drug development. It still causes significant attrition during drug development. Therefore, there is a need for smart selection of drug candidates in drug discovery, including screening of important safety endpoints. In recent years, there has been significant progress in computational and in vitro technology allowing in silico assessment as well as high-throughput screening of some endpoints at very early stages of discovery. Despite all this progress, in vivo evaluation of drug candidates is still an important part of safety testing. The chapter provides an overview of the most important areas of nonclinical safety screening during drug discovery of small molecules.
Subject(s)
Drug Discovery, Toxicity Tests, Animals, Cardiotoxicity, Preclinical Drug Evaluation, Drug-Related Side Effects and Adverse Reactions, Humans, Liver/drug effects, Neurotoxicity Syndromes/etiology
ABSTRACT
The high-quality in vivo preclinical safety data produced by the pharmaceutical industry during drug development, which follows numerous strict guidelines, are mostly not available in the public domain. These safety data are sometimes published as a condensed summary for the few compounds that reach the market, but the majority of studies are never made public and are often difficult to access in an automated way, sometimes even within the owning company itself. It is evident from many academic and industrial examples that useful data mining and model development require large and representative data sets and careful curation of the collected data. In 2010, under the auspices of the Innovative Medicines Initiative, the eTOX project started with the objective of extracting and sharing preclinical study data from paper or pdf archives of the toxicology departments of the 13 participating pharmaceutical companies and using such data to establish a detailed, well-curated database, which could then serve as a source for read-across approaches (early assessment of the potential toxicity of a drug candidate by comparison of similar structure and/or effects) and for training of predictive models. The paper describes the efforts undertaken to allow effective data sharing (intellectual property (IP) protection and set-up of adequate controlled vocabularies) and to establish the database (currently with over 4000 studies contributed by the pharma companies, corresponding to more than 1400 compounds). In addition, the status of predictive model building and some specific features of the eTOX predictive system (eTOXsys) are presented as knowledge-based decision-support tools for the early stages of the drug development process.
Subject(s)
Drug-Related Side Effects and Adverse Reactions/etiology, Pharmaceutical Preparations/chemistry, Computer Simulation, Data Mining, Pharmaceutical Databases, Drug Discovery, Humans, Biological Models, Controlled Vocabulary
ABSTRACT
Virtual control groups (VCGs) created from historical control data (HCD) can reduce the number of concurrent control group animals needed in regulatory toxicity studies by up to 25%. This study investigates the performance of VCGs on statistical outcomes of body weight development between treatment and control groups in legacy studies. The objective is to reproduce the statistical outcomes of 28-day sub-chronic studies (legacy studies) after replacing the concurrent control group with a virtual one. In rodent toxicity studies, initial body weight is used as a surrogate for the age of the animals. For the assessment of VCG-sampling methods, three different approaches were explored: (i) sampling VCGs from the entire HCD, ignoring initial body weight information of the legacy study, (ii) sampling from HCD by matching the legacy study's initial body weights, and (iii) sampling from HCD with assigned statistical weights derived from legacy study initial body weight information. The ability to reproduce statistical outcomes using virtual controls was determined by the congruence between the legacy study and the HCD weight distribution: regardless of the chosen approach, the ability to reproduce statistical outcomes was high for VCGs when the legacy study's initial body weight distribution was similar to the HCD's. When the initial body weight range of the legacy study was at the extreme ends of the HCD's distribution, the weighted sampling approach was superior. This article demonstrates the importance of proper HCD matching by the legacy study's initial body weight and discusses conditions to accurately reproduce body weight development.
Animal control data from past studies performed in a standardized manner can be used to create virtual control groups (VCGs) to use in new studies instead of control animals. This approach can reduce the number of study animals needed by up to 25%. This study assessed the performance of VCGs selected by body weight in rat studies. The objective was to reproduce the original study results as closely as possible after replacing the original control group values with VCGs from a pool of historical control values. Several methods for selecting control animal data to create VCGs were compared. Among these, assigning statistical weights to the sampling pool yielded the best performance. Ideally, the body weight distributions on day 1 of the study should be similar between the VCG and the original study animals. This article shows that proper selection of VCGs can yield reliable study data with fewer animals.
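Of the three sampling approaches above, the weighted one can be sketched as follows. The Gaussian kernel used here to turn body-weight proximity into sampling weights is an illustrative assumption, not the authors' exact weighting scheme, and the body-weight pool is invented.

```python
import math
import random
from statistics import mean

def sample_vcg(hcd_weights, legacy_mean_bw, n, bandwidth=10.0, seed=0):
    """Sample a virtual control group of size n from historical control
    initial body weights (g), weighting each historical animal by how close
    its body weight lies to the legacy study's mean (Gaussian kernel)."""
    w = [math.exp(-((bw - legacy_mean_bw) / bandwidth) ** 2)
         for bw in hcd_weights]
    rng = random.Random(seed)          # fixed seed for reproducibility
    return rng.choices(hcd_weights, weights=w, k=n)

# Hypothetical pool of historical control initial body weights (g)
hcd = [180, 185, 190, 195, 200, 205, 210, 215, 220, 250, 260, 270]
vcg = sample_vcg(hcd, legacy_mean_bw=200.0, n=10)
print(round(mean(vcg), 1))  # typically close to the legacy mean of 200 g
```

The kernel makes animals whose initial body weight is far from the legacy mean (here the 250-270 g animals) practically unselectable, which mirrors the article's finding that weighting helps most when the legacy study sits at the extremes of the HCD distribution.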
Subject(s)
Body Weight, Animals, Control Groups, Animal Testing Alternatives/methods, Toxicity Tests/methods, Rats
ABSTRACT
Historical data from control groups in animal toxicity studies are currently used mainly for comparative purposes to assess the validity and robustness of study results. Due to the highly controlled environment in which the studies are performed and the homogeneity of the animal collectives, it has been proposed to use the historical data for building so-called virtual control groups, which could partly or entirely replace the concurrent control. This would constitute a substantial contribution to the reduction of animal use in safety studies. Before the concept can be implemented, the prerequisites regarding data collection, curation and statistical evaluation, together with a validation strategy, need to be identified to avoid any impairment of the study outcome and subsequent consequences for human risk assessment. To further assess and develop the concept of virtual control groups, the transatlantic think tank for toxicology (t4) sponsored a workshop with stakeholders from the pharmaceutical and chemical industry, academia, the FDA, contract research organizations (CROs), and non-governmental organizations, which took place in Washington in March 2023. This report summarizes the current efforts of a European initiative to share, collect and curate animal control data in a centralized database, the first approaches to identify optimal matching criteria between virtual controls and the treatment arms of a study, as well as first reflections on strategies for a qualification procedure and potential pitfalls of the concept.
Animal safety studies are usually performed with three groups of animals where increasing amounts of the test chemical are given to the animals and one control group where the animals do not receive the test chemical. The design of such studies, the characteristics of the animals, and the measured parameters are often very similar from study to study. Therefore, it has been suggested that measurement data from the control groups could be reused from study to study to lower the total number of animals per study. This could reduce animal use by up to 25% for such standardized studies. A workshop was held to discuss the pros and cons of such a concept and what would have to be done to implement it without threatening the reliability of the study outcome or the resulting human risk assessment.
Subject(s)
Research, Animals, Control Groups, Pharmaceutical Preparations
ABSTRACT
The data landscape in preclinical safety assessment is fundamentally changing, not only because of emerging new data types, such as human systems biology data or real-world data (RWD) from clinical trials, but also because of technological advancements in data-processing software and analytical tools based on deep learning approaches. The recent developments in data science are illustrated with use cases for three factors: predictive safety (new in silico tools), insight generation (new data for outstanding questions), and reverse translation (extrapolating from clinical experience to resolve preclinical questions). Further advances in this field can be expected if companies focus on overcoming the identified challenges related to a lack of platforms and to data silos, and on ensuring appropriate training of data scientists within preclinical safety teams.
Subject(s)
Data Science, Software, Humans, Systems Biology
ABSTRACT
Endocrine disruption by environmental chemicals continues to be a concern for human safety. The rat, a widely used model organism in toxicology, is very sensitive to chemical-induced thyroid perturbation, e.g., histopathological alterations in thyroid tissue. Species differences in the susceptibility to thyroid perturbation lead to uncertainty in human safety risk assessments. Hazard identification and characterization of chemically induced thyroid perturbation would therefore benefit from in vitro models addressing different mechanisms of action in a single functional assay, ideally across species. Here we introduce a rat thyroid-liver chip that enables simultaneous identification of direct and indirect (liver-mediated) thyroid perturbation on organ-level functions in vitro. A second manuscript describes our work toward a human thyroid-liver chip (Kühnlenz et al., 2022). The presented microfluidic model, consisting of primary rat thyroid follicles and liver 3D spheroids, maintains a tissue-specific phenotype for up to 21 days. More precisely, the thyroid model exhibits a follicular architecture expressing basolateral and apical markers and secretes T4. Likewise, liver spheroids retain hepatocellular characteristics, e.g., a stable release of albumin and urea, the presence of bile canalicular networks, and the formation of T4-glucuronide. Experiments with reference chemicals demonstrated proficiency in detecting direct and indirect mechanisms of thyroid perturbation through decreased thyroid hormone secretion and increased gT4 formation, respectively. Prospectively, this rat thyroid-liver chip model, together with its human counterpart, may support a species-specific quantitative in vitro to in vivo extrapolation to improve a data-driven and evidence-based human safety risk assessment, with significant contributions to the 3R principles.
Subject(s)
Rodents, Thyroid Gland, Humans, Rats, Animals, Animal Testing Alternatives, Liver
ABSTRACT
Thyroid hormones (THs) are crucial regulators of human metabolism and early development. During the safety assessment of plant protection products, the human relevance of chemically induced TH perturbations observed in test animals remains uncertain. European regulatory authorities request follow-up in vitro studies to elucidate human-relevant interferences on thyroid gland function or TH catabolism through hepatic enzyme induction. However, human in vitro assays based on single molecular initiating events poorly reflect the complex TH biology and the related liver-thyroid axis. To address this complexity, we present human three-dimensional thyroid and liver organoids with key functions of TH metabolism. The thyroid model exhibits an in vivo-like follicular architecture and TSH-dependent triiodothyronine synthesis over 21 days, which is inhibited by methimazole. The HepaRG-based liver model, secreting the critical TH-binding proteins albumin and thyroxine-binding globulin, emulates an active TH catabolism via the formation of glucuronidated and sulfated thyroxine (gT4/sT4). Activation of the nuclear receptors PXR and AHR was demonstrated via the induction of specific CYP isoenzymes by rifampicin, pregnenolone-16α-carbonitrile, and β-naphthoflavone. However, this nuclear receptor activation, assumed to regulate UDP-glucuronosyltransferases and sulfotransferases, appeared to have no effect on gT4 and sT4 formation in this human-derived hepatic cell line model. Finally, the established single-tissue models were successfully co-cultured in a perfused two-organ chip for 21 days. In conclusion, this model presents a first step towards a complex multimodular human platform that will help to identify both direct and indirect thyroid disruptors that are relevant from a human safety perspective.
Subject(s)
Chemical Safety, Thyroid Gland, Animals, Humans, Thyroid Gland/metabolism, Microfluidics, Thyroid Hormones/metabolism, Thyroid Hormones/pharmacology, Liver, Cytoplasmic and Nuclear Receptors/metabolism, Cytoplasmic and Nuclear Receptors/pharmacology
ABSTRACT
INTRODUCTION: BAY1128688 is a selective inhibitor of aldo-keto reductase family 1 member C3 (AKR1C3), an enzyme implicated in the pathology of endometriosis and other disorders. In vivo animal studies suggested a potential therapeutic application of BAY1128688 in treating endometriosis. Early clinical studies in healthy volunteers supported the start of phase IIa. OBJECTIVE: This manuscript reports the results of a clinical trial (AKRENDO1) assessing the effects of BAY1128688 in adult premenopausal women with endometriosis-related pain symptoms over a 12-week treatment period. METHODS: Participants in this placebo-controlled, multicenter phase IIa clinical trial (NCT03373422) were randomized into one of five BAY1128688 treatment groups: 3 mg once daily (OD), 10 mg OD, 30 mg OD, 30 mg twice daily (BID), 60 mg BID; or a placebo group. The efficacy, safety, and tolerability of BAY1128688 were investigated. RESULTS: Dose-/exposure-dependent hepatotoxicity was observed following BAY1128688 treatment, characterized by elevations in serum alanine aminotransferase (ALT) occurring at around 12 weeks of treatment and prompting premature trial termination. The reduced number of valid trial completers precludes conclusions regarding treatment efficacy. The pharmacokinetics and pharmacodynamics of BAY1128688 among participants with endometriosis were comparable with those previously found in healthy volunteers and were not predictive of the subsequent ALT elevations observed. CONCLUSIONS: The hepatotoxicity of BAY1128688 observed in AKRENDO1 was predicted neither by animal studies nor by studies in healthy volunteers. However, in vitro interactions of BAY1128688 with bile salt transporters indicated a potential risk factor for hepatotoxicity at higher doses. This highlights the importance of in vitro mechanistic and transporter interaction studies in the assessment of hepatotoxicity risk and suggests further mechanistic understanding is required.
CLINICAL TRIAL REGISTRATION: NCT03373422 (date registered: November 23, 2017).
Subject(s)
Chemical and Drug Induced Liver Injury, Endometriosis, Humans, Animals, Female, Endometriosis/drug therapy, Aldo-Keto Reductase Family 1 Member C3, Risk Factors, Treatment Outcome, Double-Blind Method
ABSTRACT
For decades, preclinical toxicology was essentially a descriptive discipline in which treatment-related effects were carefully reported and used as a basis to calculate safety margins for drug candidates. In recent years, however, technological advances have increasingly enabled researchers to gain insights into toxicity mechanisms, supporting greater understanding of species relevance and translatability to humans, prediction of safety events, mitigation of side effects and development of safety biomarkers. Consequently, investigative (or mechanistic) toxicology has been gaining momentum and is now a key capability in the pharmaceutical industry. Here, we provide an overview of the current status of the field using case studies and discuss the potential impact of ongoing technological developments, based on a survey of investigative toxicologists from 14 European-based medium-sized to large pharmaceutical companies.
Subject(s)
Pharmaceutical Industry, Drug-Related Side Effects and Adverse Reactions, Humans, Drug-Related Side Effects and Adverse Reactions/prevention & control, Biomarkers, Technology, Preclinical Drug Evaluation
ABSTRACT
There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison ("read-across"), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). In order to achieve these goals, a consortium approach, grouping numbers of relevant partners, is required. The eTOX ("electronic toxicity") consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database consisting of preclinical toxicity data for drug compounds or candidates extracted from previously unpublished, legacy reports from thirteen pharmaceutical companies based in, or with operations in, Europe. The database will be enhanced by incorporation of publicly available, high-quality toxicology data. Seven academic institutes and five small-to-medium size enterprises (SMEs) contribute with their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project are described here, together with current achievements and future deliverables.
Subject(s)
Factual Databases, Drug-Related Side Effects and Adverse Reactions, Expert Systems, Knowledge Bases, Animals, Data Mining, Preclinical Drug Evaluation, Humans, Information Dissemination, Risk Assessment
ABSTRACT
Integrative drug safety research in translational health informatics has evolved rapidly and draws on data from many resources, combining diverse data that are either reused from (curated) repositories or newly generated at source. Each resource is governed by a different set of metadata rules imposed on the incoming data. The data cannot readily be combined without data stewardship and top-down policy guidelines that supervise and inform the combination process to support meaningful interpretation and analysis. Drawing on the eTRANSAFE Consortium's effort to drive large-scale integrative drug safety research, we present the lessons learnt and the guidelines in practice at this Innovative Medicines Initiative (IMI) project. Recommendations in these guidelines were compiled from feedback received from key stakeholders in regulatory agencies, EFPIA companies, and academic partners. The research reproducibility guidelines presented in this study lay the foundation for comprehensive data sharing and knowledge management plans covering research data management in the drug safety space: FAIR data sharing guidelines and model verification guidelines, offered as generic deliverables and best practices that can be reused by the wider scientific community. FAIR data sharing is a dynamic landscape that evolves rapidly with fast-paced technological advancements. The research reproducibility guidelines for drug safety introduced in this study provide a reusable framework that can be adopted by other research communities aiming to integrate public and private data in the biomedical research space.
Subject(s)
Biomedical Research, Public Sector, Information Dissemination, Metadata, Reproducibility of Results
ABSTRACT
Pre-competitive data sharing can offer the pharmaceutical industry significant benefits in terms of reducing the time and costs involved in getting a new drug to market through more informed testing strategies and knowledge gained by pooling data. If sufficient data is shared and can be co-analyzed, then it can also offer the potential for reduced animal usage and improvements in the in silico prediction of toxicological effects. Data sharing benefits can be further enhanced by applying the FAIR Guiding Principles, reducing time spent curating, transforming and aggregating datasets and allowing more time for data mining and analysis. We hope to facilitate data sharing by other organizations and initiatives by describing lessons learned as part of the Enhancing TRANslational SAFEty Assessment through Integrative Knowledge Management (eTRANSAFE) project, an Innovative Medicines Initiative (IMI) partnership which aims to integrate publicly available data sources with proprietary preclinical and clinical data donated by pharmaceutical organizations. Methods to foster trust and overcome non-technical barriers to data sharing, such as legal and intellectual property rights (IPR) concerns, are described, including the security requirements that pharmaceutical organizations generally expect to be met. We share the consensus achieved among pharmaceutical partners on decision criteria to be included in internal clearance procedures used to decide if data can be shared. We also report on the consensus achieved on specific data fields to be excluded from sharing for sensitive preclinical safety and pharmacology data that could otherwise not be shared.