Results 1 - 20 of 479
1.
ALTEX ; 41(3): 344-362, 2024.
Article in English | MEDLINE | ID: mdl-39016082

ABSTRACT

The Human Exposome Project aims to revolutionize our understanding of how environmental exposures affect human health by systematically cataloging and analyzing the myriad exposures individuals encounter throughout their lives. This initiative draws a parallel with the Human Genome Project, expanding the focus from genetic factors to the dynamic and complex nature of environmental interactions. The project leverages advanced methodologies such as omics technologies, biomonitoring, microphysiological systems (MPS), and artificial intelligence (AI), forming the foundation of exposome intelligence (EI) to integrate and interpret vast datasets. Key objectives include identifying exposure-disease links, prioritizing hazardous chemicals, enhancing public health and regulatory policies, and reducing reliance on animal testing. The Implementation Moonshot Project for Alternative Chemical Testing (IMPACT), spearheaded by the Center for Alternatives to Animal Testing (CAAT), is a new element in this endeavor, driving the creation of a public-private partnership toward a Human Exposome Project with a stakeholder forum in 2025. Establishing robust infrastructure, fostering interdisciplinary collaborations, and ensuring quality assurance through systematic reviews and evidence-based frameworks are crucial for the project's success. The expected outcomes promise transformative advancements in precision public health, disease prevention, and a more ethical approach to toxicology. This paper outlines the strategic imperatives, challenges, and opportunities that lie ahead, calling on stakeholders to support and participate in this landmark initiative for a healthier, more sustainable future.


This paper outlines a proposal for a "Human Exposome Project" to comprehensively study how environmental exposures affect human health throughout our lives. The exposome refers to all the environmental factors we are exposed to, from chemicals to diet to stress. The project aims to use advanced technologies like artificial intelligence, lab-grown mini-organs, and detailed biological measurements to map how different exposures impact our health. This could help identify causes of diseases and guide better prevention strategies. Key goals include finding links between specific exposures and health problems, determining which chemicals are most concerning, improving public health policies, and reducing animal testing. The project requires collaboration between researchers, government agencies, companies, and others. While ambitious, this effort could revolutionize our understanding of environmental health risks. The potential benefits for improving health and preventing disease make this an important step toward a precise and comprehensive approach to public health and disease prevention.


Subject(s)
Animal Testing Alternatives, Environmental Exposure, Exposome, Humans, Animals, Hazardous Substances/toxicity, Public Health, Environmental Monitoring/methods
3.
Innovation (Camb) ; 5(5): 100658, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39071220

ABSTRACT

Disagreements about language use are common both between and within fields. Where interests require multidisciplinary collaboration or the field of research has the potential to impact society at large, it becomes critical to minimize these disagreements where possible. The development of diverse intelligent systems, regardless of the substrate (e.g., silicon vs. biology), is a case where both conditions are met. Significant advancements have occurred in the development of technology progressing toward these diverse intelligence systems. Whether progress is silicon based, such as the use of large language models, or through synthetic biology methods, such as the development of organoids, a clear need for a community-based approach to seeking consensus on nomenclature is now vital. Here, we welcome collaboration from the wider scientific community, proposing a pathway forward to achieving this intention, highlighting key terms and fields of relevance, and suggesting potential consensus-making methods to be applied.

4.
bioRxiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38903103

ABSTRACT

The cannabinoid CB2 receptor (CB2R) is a potential therapeutic target for distinct forms of tissue injury and inflammatory diseases. To thoroughly investigate the role of CB2R in pathophysiological conditions and for target validation in vivo, optimal pharmacological tool compounds are essential. Despite the sizable progress in the generation of potent and selective CB2R ligands, pharmacokinetic parameters are often neglected for in vivo studies. Here, we report the generation and characterization of a tetra-substituted pyrazole CB2R full agonist named RNB-61 with high potency (Ki 0.13-1.81 nM, depending on species) and a peripherally restricted action due to P-glycoprotein-mediated efflux from the brain. ³H- and ¹⁴C-labelled RNB-61 showed apparent Kd values < 4 nM towards human CB2R in both cell and tissue experiments. The >6000-fold selectivity over CB1 receptors and negligible off-targets in vitro, combined with high oral bioavailability and suitable systemic pharmacokinetic (PK) properties, prompted the assessment of RNB-61 in a mouse ischemia-reperfusion model of acute kidney injury (AKI) and in a rat model of chronic kidney injury/inflammation and fibrosis (CKI) induced by unilateral ureteral obstruction. RNB-61 exerted dose-dependent nephroprotective and/or antifibrotic effects in the AKI/CKI models. Thus, RNB-61 is an optimal CB2R tool compound for preclinical in vivo studies with superior biophysical and PK properties over generally used CB2R ligands.

5.
ALTEX ; 41(3): 402-424, 2024.
Article in English | MEDLINE | ID: mdl-38898799

ABSTRACT

The webinar series and workshop titled "Trust Your Gut: Establishing Confidence in Gastrointestinal Models – An Overview of the State of the Science and Contexts of Use" was co-organized by NICEATM, NIEHS, FDA, EPA, CPSC, DoD, and the Johns Hopkins Center for Alternatives to Animal Testing (CAAT) and hosted at the National Institutes of Health in Bethesda, MD, USA on October 11-12, 2023. New approach methods (NAMs) for assessing issues of gastrointestinal tract (GIT)-related toxicity offer promise in addressing some of the limitations associated with animal-based assessments. GIT NAMs vary in complexity, from two-dimensional monolayer cell line-based systems to sophisticated three-dimensional organoid systems derived from human primary cells. Despite advances in GIT NAMs, challenges remain in fully replicating the complex interactions and processes occurring within the human GIT. Presentations and discussions addressed regulatory needs, challenges, and innovations in incorporating NAMs into risk assessment frameworks; explored the state of the science in using NAMs for evaluating systemic toxicity, understanding absorption and pharmacokinetics, evaluating GIT toxicity, and assessing potential allergenicity; and discussed strengths, limitations, and data gaps of GIT NAMs as well as steps needed to establish confidence in these models for use in the regulatory setting.


Non-animal methods to assess whether chemicals may be toxic to the human digestive tract promise to complement or improve on animal-based methods. These approaches, which are based on human or animal cells and/or computer models, are faced with their own technical challenges and need to be shown to predict adverse effects in humans. Regulators are tasked with evaluating submitted data to best protect human health and the environment. A webinar series and workshop brought together scientists from academia, industry, military, and regulatory authorities from different countries to discuss how non-animal methods can be integrated into the risk assessment of drugs, food additives, dietary supplements, pesticides, and industrial chemicals for gastrointestinal toxicity.


Subject(s)
Animal Testing Alternatives, Gastrointestinal Tract, Humans, Animal Testing Alternatives/methods, Animals, Biological Models, Risk Assessment/methods, Toxicity Tests/methods
7.
ALTEX ; 41(2): 152-178, 2024.
Article in English | MEDLINE | ID: mdl-38579692

ABSTRACT

Developmental neurotoxicity (DNT) testing has seen enormous progress over the last two decades. Preceding even the publication of the animal-based OECD test guideline for DNT testing in 2007, a series of non-animal technology workshops and conferences that started in 2005 has shaped a community that has delivered a comprehensive battery of in vitro test methods (DNT IVB). Its data interpretation is now covered by a very recent OECD guidance (No. 377). Here, we overview the progress in the field, focusing on the evolution of testing strategies, the role of emerging technologies, and the impact of OECD test guidelines on DNT testing. In particular, this is an example of the targeted development of an animal-free testing approach for one of the most complex hazards of chemicals to human health. These developments started literally from a blank slate, with no proposed alternative methods available. Over two decades, cutting-edge science enabled the design of a testing approach that spares animals and enables throughput to address this challenging hazard. While it is evident that the field needs guidance and regulation, the massive economic impact of decreased human cognitive capacity caused by chemical exposure should be prioritized more highly. Beyond this, the claim to fame of DNT in vitro testing is the enormous scientific progress it has brought for understanding the human brain, its development, and how it can be perturbed.


Developmental neurotoxicity (DNT) testing predicts the hazard that exposure to chemicals poses to human brain development. Comprehensive advanced non-animal testing strategies using cutting-edge technology can now replace animal-based approaches to assess this complex hazard. These strategies can assess large numbers of chemicals more accurately and efficiently than the animal-based approach. Recent OECD test guidance has formalized this battery of in vitro test methods for DNT, marking a pivotal achievement in the field. The shift towards non-animal testing reflects both a commitment to animal welfare and a growing recognition of the economic and public health impacts associated with impaired cognitive function caused by chemical exposures. These innovations ultimately contribute to safer chemical management and better protection of human health, especially during the vulnerable stages of brain development.


Subject(s)
Neurotoxicity Syndromes, Toxicity Tests, Animals, Animal Testing Alternatives, Animal Models, Neurotoxicity Syndromes/etiology
8.
ALTEX ; 41(2): 179-201, 2024.
Article in English | MEDLINE | ID: mdl-38629803

ABSTRACT

When The Principles of Humane Experimental Technique was published in 1959, authors William Russell and Rex Burch had a modest goal: to make researchers think about what they were doing in the laboratory – and to do it more humanely. Sixty years later, their groundbreaking book was celebrated for inspiring a revolution in science and launching a new field: The 3Rs of alternatives to animal experimentation. On November 22, 2019, some pioneering and leading scientists and researchers in the field gathered at the Johns Hopkins Bloomberg School of Public Health in Baltimore for the 60 Years of the 3Rs Symposium: Lessons Learned and the Road Ahead. The event was sponsored by the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), the Foundation for Chemistry Research and Initiatives, the Alternative Research & Development Foundation (ARDF), the American Cleaning Institute (ACI), the International Fragrance Association (IFRA), the Institute for In Vitro Sciences (IIVS), John "Jack" R. Fowle III, and the Society of Toxicology (SoT). Fourteen presentations shared the history behind the groundbreaking publication, international efforts to achieve its aims, stumbling blocks to progress, as well as remarkable achievements. The day was a tribute to Russell and Burch, and a testament to what is possible when people from many walks of life – science, government, and industry – work toward a common goal.


William Russell and Rex Burch published their book The Principles of Humane Experimental Technique in 1959. The book encouraged researchers to replace animal experiments where it was possible, to refine experiments with animals in order to reduce their suffering, and to reduce the number of animals that had to be used for experiments to the minimum. Sixty years later, a group of pioneering and leading scientists and researchers in the field gathered to share how the publication came about and how the vision inspired international collaborations and successes on many different levels including new laws. The paper includes an overview of important milestones in the history of alternatives to animal experimentation.


Subject(s)
Animal Experimentation, Animal Testing Alternatives, Animals, Animal Testing Alternatives/methods, Animal Welfare, Research Design
9.
Stem Cell Reports ; 19(5): 604-617, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38670111

ABSTRACT

Cell culture technology has evolved, moving from single-cell and monolayer methods to 3D models like reaggregates, spheroids, and organoids, improved with bioengineering like microfabrication and bioprinting. These advancements, termed microphysiological systems (MPSs), closely replicate tissue environments and human physiology, enhancing research and biomedical uses. However, MPS complexity introduces standardization challenges, impacting reproducibility and trust. We offer guidelines for quality management and control criteria specific to MPSs, facilitating reliable outcomes without stifling innovation. Our fit-for-purpose recommendations provide actionable advice for achieving consistent MPS performance.


Subject(s)
Cell Culture Techniques, Humans, Reproducibility of Results, Cell Culture Techniques/methods, Quality Control, Organoids/cytology, Microphysiological Systems
10.
Environ Sci Technol ; 58(12): 5267-5278, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38478874

ABSTRACT

Tetrabromobisphenol A (TBBPA), the most extensively utilized brominated flame retardant, has raised growing concerns regarding its environmental and health risks. Neurovascular formation is essential for metabolically supporting neuronal networks. However, previous studies primarily addressed the neuronal injuries caused by TBBPA; its impact on the neurovasculature and the underlying molecular mechanisms are yet to be elucidated. In this study, 5, 30, 100, and 300 µg/L of TBBPA were administered to Tg(fli1a:eGFP) zebrafish larvae at 2-72 h postfertilization (hpf). The findings revealed that TBBPA impaired cerebral and ocular angiogenesis in zebrafish. Metabolomics analysis showed that TBBPA-treated neuroendothelial cells exhibited disruption of the TCA cycle and the Warburg effect pathway. TBBPA induced a significant reduction in glycolysis and mitochondrial ATP production rates, accompanied by mitochondrial fragmentation and an increase in mitochondrial reactive oxygen species (mitoROS) production in neuroendothelial cells. The supplementation of alpha-ketoglutaric acid, a key metabolite of the TCA cycle, mitigated TBBPA-induced mitochondrial damage, reduced mitoROS production, and restored angiogenesis in zebrafish larvae. Our results suggested that TBBPA exposure induced neurovascular injury via mitochondrial metabolic perturbation mediated by mitoROS signaling, providing novel insight into the neurovascular toxicity and mode of action of TBBPA.


Subject(s)
Flame Retardants, Polybrominated Biphenyls, Animals, Humans, Zebrafish, Endothelial Cells/metabolism, Polybrominated Biphenyls/toxicity, Larva/metabolism, Flame Retardants/toxicity
11.
Metabolites ; 14(2), 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38392990

ABSTRACT

Metabolomics is emerging as a powerful systems biology approach for improving preclinical drug safety assessment. This review discusses current applications and future trends of metabolomics in toxicology and drug development. Metabolomics can elucidate adverse outcome pathways by detecting endogenous biochemical alterations underlying toxicity mechanisms. Furthermore, metabolomics enables better characterization of human environmental exposures and their influence on disease pathogenesis. Metabolomics approaches are being increasingly incorporated into toxicology studies and safety pharmacology evaluations to gain mechanistic insights and identify early biomarkers of toxicity. However, realizing the full potential of metabolomics in regulatory decision making requires a robust demonstration of reliability through quality assurance practices, reference materials, and interlaboratory studies. Overall, metabolomics shows great promise in strengthening the mechanistic understanding of toxicity, enhancing routine safety screening, and transforming exposure and risk assessment paradigms. Integration of metabolomics with computational, in vitro, and personalized medicine innovations will shape future applications in predictive toxicology.

12.
ALTEX ; 41(2): 273-281, 2024.
Article in English | MEDLINE | ID: mdl-38215352

ABSTRACT

Both the shortcomings of existing risk assessment methodologies and the newly available tools to predict hazard and risk with machine learning approaches have led to an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of an increased complexity of the process, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges, and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorized to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the acknowledgement of user-friendly open-source software, a rise in the expertise of toxicologists required to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
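The deterministic-to-probabilistic shift described above can be made concrete with a small Monte Carlo calculation: instead of a single margin-of-exposure number, uncertainty in both the exposure estimate and the point of departure is propagated to a probability of exceedance. The sketch below is purely illustrative; the lognormal distributions, parameter values, and the margin threshold of 100 are assumptions for demonstration, not values from the paper or from any guidance.

import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Hypothetical lognormal uncertainty around an exposure estimate (mg/kg bw/day)
# and around a point of departure derived from a hazard model.
exposure = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=n)
point_of_departure = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n)

# Distribution of the margin of exposure, rather than a single deterministic ratio.
margin = point_of_departure / exposure

print(f"median margin of exposure: {np.median(margin):.1f}")
print(f"P(margin < 100): {np.mean(margin < 100.0):.2%}")

The output is a probability statement ("the margin falls below 100 in X% of simulations"), which is the kind of quantified uncertainty a probabilistic assessment would communicate.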


Probabilistic risk assessment, initially from engineering, is applied in toxicology to understand chemical-related hazards and their consequences. In toxicology, uncertainties abound: unclear molecular events, varied proposed outcomes, and population-level assessments for issues like neurodevelopmental disorders. Establishing links between chemical exposures and diseases, especially rare events like birth defects, often demands extensive studies. Existing methods struggle with subtle effects or those affecting specific groups. Future risk assessments must address developmental disease origins, presenting challenges beyond current capabilities. The intricate nature of many toxicological processes, lack of consensus on mechanisms and outcomes, and the need for nuanced population-level assessments highlight the complexities in understanding and quantifying risks associated with chemical exposures in the field of toxicology.


Subject(s)
Artificial Intelligence, Toxicology, Animals, Humans, Animal Testing Alternatives, Risk Assessment/methods, Uncertainty, Toxicology/methods
13.
Arch Toxicol ; 98(3): 735-754, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244040

ABSTRACT

The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure, to a data-rich field ripe for AI integration. The volume, variety and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured, a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.


Subject(s)
Artificial Intelligence, Chemical Safety, Humans, Computational Neural Networks, Machine Learning, Catalysis
14.
Adv Healthc Mater ; : e2302745, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38252094

ABSTRACT

Brain organoids are 3D in vitro culture systems derived from human pluripotent stem cells that self-organize to model features of the (developing) human brain. This review examines the techniques behind organoid generation, their current and potential applications, and future directions for the field. Brain organoids possess complex architecture containing various neural cell types, synapses, and myelination. They have been utilized for toxicology testing, disease modeling, infection studies, personalized medicine, and gene-environment interaction studies. An emerging concept termed Organoid Intelligence (OI) combines organoids with artificial intelligence systems to generate learning and memory, with the goals of modeling cognition and enabling biological computing applications. Brain organoids allow neuroscience studies not previously achievable with traditional techniques, and have the potential to transform disease modeling, drug development, and the understanding of human brain development and disorders. The aspirational vision of OI parallels the origins of artificial intelligence, and efforts are underway to map a roadmap toward its realization. In summary, brain organoids constitute a disruptive technology that is rapidly advancing and gaining traction across multiple disciplines.

15.
Altern Lab Anim ; 52(2): 117-131, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38235727

ABSTRACT

The first Stakeholder Network Meeting of the EU Horizon 2020-funded ONTOX project was held on 13-14 March 2023, in Brussels, Belgium. The discussion centred around identifying specific challenges, barriers and drivers in relation to the implementation of non-animal new approach methodologies (NAMs) and probabilistic risk assessment (PRA), in order to help address the issues and rank them according to their associated level of difficulty. ONTOX aims to advance the assessment of chemical risk to humans, without the use of animal testing, by developing non-animal NAMs and PRA in line with 21st century toxicity testing principles. Stakeholder groups (regulatory authorities, companies, academia, non-governmental organisations) were identified and invited to participate in a meeting and a survey, by which their current position in relation to the implementation of NAMs and PRA was ascertained, as well as specific challenges and drivers highlighted. The survey analysis revealed areas of agreement and disagreement among stakeholders on topics such as capacity building, sustainability, regulatory acceptance, validation of adverse outcome pathways, acceptance of artificial intelligence (AI) in risk assessment, and guaranteeing consumer safety. The stakeholder network meeting resulted in the identification of barriers, drivers and specific challenges that need to be addressed. Breakout groups discussed topics such as hazard versus risk assessment, future reliance on AI and machine learning, regulatory requirements for industry and sustainability of the ONTOX Hub platform. The outputs from these discussions provided insights for overcoming barriers and leveraging drivers for implementing NAMs and PRA. It was concluded that there is a continued need for stakeholder engagement, including the organisation of a 'hackathon' to tackle challenges, to ensure the successful implementation of NAMs and PRA in chemical risk assessment.


Subject(s)
Adverse Outcome Pathways, Artificial Intelligence, Animals, Humans, Toxicity Tests, Risk Assessment, Belgium
16.
ALTEX ; 41(1): 3-19, 2024.
Article in English | MEDLINE | ID: mdl-38194639

ABSTRACT

Green toxicology is marching chemistry into the 21st century. This emerging framework will transform how chemical safety is evaluated by incorporating evaluation of the hazards, exposures, and risks associated with chemicals into early product development in a way that minimizes adverse impacts on human and environmental health. The goal is to minimize toxic threats across entire supply chains through smarter designs and policies. Traditional animal testing methods are replaced by faster, cutting-edge innovations like organs-on-chips and artificial intelligence predictive models that are also more cost-effective. Core principles of green toxicology include utilizing alternative test methods, applying the precautionary principle, considering lifetime impacts, and emphasizing risk prevention over reaction. This paper provides an overview of these foundational concepts and describes current initiatives and future opportunities to advance the adoption of green toxicology approaches. Challenges and limitations are also discussed. Green shoots are emerging with governments offering carrots like the European Green Deal to nudge industry. Notably, animal rights and environmental groups have different ideas about the need for testing and its consequences for animal use. Green toxicology represents the way forward to support both these societal needs with sufficient throughput and human relevance for hazard information and minimal animal suffering. Green toxicology thus sets the stage to synergize human health and ecological values. Overall, the integration of green chemistry and toxicology has potential to profoundly shift how chemical risks are evaluated and managed to achieve safety goals in a more ethical, ecologically conscious manner.


Green toxicology aims to make chemicals safer by design. It focuses on preventing toxicity issues early during development instead of testing after products are developed. Green toxicology uses modern non-animal methods like computer models and lab tests with human cells to predict if chemicals could be hazardous. Benefits are faster results, lower costs, and less animal testing. The principles of green toxicology include using alternative tests, applying caution even with uncertain data, considering lifetime impacts across global supply chains, and emphasizing prevention over reaction. The article highlights European and US policy efforts to spur sustainable chemistry innovation, which will necessitate greener approaches to assess new materials and drive adoption. Overall, green toxicology seeks to integrate safer design concepts so that human and environmental health are valued equally with functionality and profit. This alignment promises safer, ethical products but faces challenges around validating new methods and overcoming institutional resistance to change.


Subject(s)
Artificial Intelligence, Chemical Safety, Animals, Humans, Animal Testing Alternatives, Environmental Health, Industry
17.
ALTEX ; 41(2): 282-301, 2024.
Article in English | MEDLINE | ID: mdl-38043132

ABSTRACT

Historical data from control groups in animal toxicity studies are currently used mainly for comparative purposes to assess the validity and robustness of study results. Due to the highly controlled environment in which the studies are performed and the homogeneity of the animal collectives, it has been proposed to use the historical data for building so-called virtual control groups, which could partly or entirely replace the concurrent control. This would constitute a substantial contribution to the reduction of animal use in safety studies. Before the concept can be implemented, the prerequisites regarding data collection, curation, and statistical evaluation, together with a validation strategy, need to be identified to avoid any impairment of the study outcome and subsequent consequences for human risk assessment. To further assess and develop the concept of virtual control groups, the transatlantic think tank for toxicology (t4) sponsored a workshop in Washington in March 2023 with stakeholders from the pharmaceutical and chemical industries, academia, the FDA, contract research organizations (CROs), and non-governmental organizations. This report summarizes the current efforts of a European initiative to share, collect, and curate animal control data in a centralized database, the first approaches to identify optimal matching criteria between virtual controls and the treatment arms of a study, as well as first reflections about strategies for a qualification procedure and potential pitfalls of the concept.
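One way to picture the "matching criteria" question is as a filter over a pooled historical-control table, where virtual controls are drawn only from past studies whose design metadata match the planned treatment arms. The pandas sketch below is a hypothetical illustration under assumed column names (species, strain, sex, route, age_weeks) and an assumed age tolerance; it is not the matching procedure agreed at the workshop.

import pandas as pd

def select_virtual_controls(historical: pd.DataFrame, study: dict,
                            max_age_diff_weeks: int = 2) -> pd.DataFrame:
    """Return historical control records compatible with a planned study.

    `historical` is a pooled control-group table; `study` holds the design
    metadata of the new study's treatment arms (hypothetical field names).
    """
    mask = (
        (historical["species"] == study["species"])
        & (historical["strain"] == study["strain"])
        & (historical["sex"] == study["sex"])
        & (historical["route"] == study["route"])
        & ((historical["age_weeks"] - study["age_weeks"]).abs() <= max_age_diff_weeks)
    )
    return historical.loc[mask]

# Hypothetical usage:
# pooled = pd.read_csv("pooled_control_data.csv")
# virtual_controls = select_virtual_controls(
#     pooled, {"species": "rat", "strain": "Wistar", "sex": "M",
#              "route": "oral gavage", "age_weeks": 8})

In practice the open question is which fields must match exactly, which may vary within a tolerance, and how many matched records are needed before a virtual group is considered adequate.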


Animal safety studies are usually performed with three groups of animals that receive increasing amounts of the test chemical and one control group that does not receive the test chemical. The design of such studies, the characteristics of the animals, and the measured parameters are often very similar from study to study. It has therefore been suggested that measurement data from control groups could be reused across studies to lower the total number of animals per study. This could reduce animal use by up to 25% for such standardized studies. A workshop was held to discuss the pros and cons of such a concept and what would have to be done to implement it without threatening the reliability of the study outcome or the resulting human risk assessment.


Subject(s)
Research, Animals, Control Groups, Pharmaceutical Preparations
18.
Front Toxicol ; 5: 1216802, 2023.
Article in English | MEDLINE | ID: mdl-37908592

ABSTRACT

Introduction: The positive identification of xenobiotics and their metabolites in human biosamples is an integral aspect of exposomics research, yet challenges in compound annotation and identification continue to limit the feasibility of comprehensive identification of total chemical exposure. Nonetheless, the adoption of in silico tools such as metabolite prediction software, QSAR-ready structural conversion workflows, and molecular standards databases can aid in identifying novel compounds in untargeted mass spectral investigations, permitting the assessment of a more expansive pool of compounds for human health hazard. This strategy is particularly applicable when it comes to flame retardant chemicals. The population is ubiquitously exposed to flame retardants, and evidence implicates some of these compounds as developmental neurotoxicants, endocrine disruptors, reproductive toxicants, immunotoxicants, and carcinogens. However, many flame retardants are poorly characterized, have not been linked to a definitive mode of toxic action, and are known to share metabolic breakdown products which may themselves harbor toxicity. As U.S. regulatory bodies begin to pursue a subclass-based risk assessment of organohalogen flame retardants, little consideration has been paid to the role of potentially toxic metabolites, or to expanding the identification of parent flame retardants and their metabolic breakdown products in human biosamples to better inform the human health hazards imposed by these compounds. Methods: The purpose of this study is to utilize publicly available in silico tools to 1) characterize the structural and metabolic fates of proposed flame retardant classes, 2) predict first-pass metabolites, 3) ascertain whether metabolic products segregate among parent flame retardant classification patterns, and 4) assess the existing coverage of these compounds in mass spectral databases. Results: We found that flame retardant classes as currently defined by the National Academies of Sciences, Engineering, and Medicine (NASEM) are structurally diverse, with highly variable predicted pharmacokinetic properties and metabolic fates among member compounds. The vast majority of flame retardants (96%) and their predicted metabolites (99%) are not present in spectral databases, posing a challenge for identifying these compounds in human biosamples. However, we also demonstrate the utility of publicly available in silico methods in generating a fit-for-purpose synthetic spectral library for flame retardants and their metabolites that have yet to be identified in human biosamples. Discussion: In conclusion, exposomics studies making use of fit-for-purpose synthetic spectral databases will better resolve internal exposure and windows of vulnerability associated with complex exposures to flame retardant chemicals and perturbed neurodevelopmental, reproductive, and other associated apical human health impacts.
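As a rough illustration of the database-coverage check described in the Methods and Results, parent compounds and predicted metabolites can be keyed by InChIKey and compared against a spectral-library index. The snippet below is a minimal sketch using RDKit; the candidate SMILES and the library set are placeholders, not the NASEM flame retardant classes or a real database export.

from rdkit import Chem

def inchikey_block(smiles):
    """First block of the InChIKey (connectivity only), a common lookup key."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return Chem.MolToInchiKey(mol).split("-")[0]

# Hypothetical parent compound and placeholder metabolite (SMILES).
candidates = {
    "TBBPA": "CC(C)(c1cc(Br)c(O)c(Br)c1)c1cc(Br)c(O)c(Br)c1",
    "phenol (placeholder metabolite)": "Oc1ccccc1",
}

# Hypothetical spectral-library index, e.g. keys parsed from an MSP export.
library_keys = {"ISWSIDIOOBJBQZ"}  # assumed to contain only phenol here

for name, smi in candidates.items():
    key = inchikey_block(smi)
    covered = key in library_keys if key else False
    print(f"{name}: {'in library' if covered else 'not in library'}")

Running such a check across all class members and predicted metabolites would yield coverage percentages of the kind reported in the Results.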

19.
Front Artif Intell ; 6: 1269932, 2023.
Article in English | MEDLINE | ID: mdl-37915539

ABSTRACT

The rapid progress of AI impacts various areas of life, including toxicology, and promises a major role for AI in future risk assessments. Toxicology has shifted from a purely empirical science focused on observing chemical exposure outcomes to a data-rich field ripe for AI integration. AI methods are well suited to handling and integrating large, diverse data volumes, a key challenge in modern toxicology. Additionally, AI enables Predictive Toxicology, as demonstrated by the automated read-across tool RASAR that achieved 87% balanced accuracy across nine OECD tests and 190,000 chemicals, outperforming animal test reproducibility. AI's ability to handle big data and provide probabilistic outputs facilitates probabilistic risk assessment. Rather than just replicating human skills at larger scales, AI should be viewed as a transformative technology. Despite potential challenges, like model black-boxing and dataset biases, explainable AI (xAI) is emerging to address these issues.
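RASAR itself is not reproduced here, but the core read-across idea (predicting a hazard label for a query chemical from its most structurally similar neighbors) can be sketched with standard cheminformatics tools. The example below uses RDKit Morgan fingerprints and Tanimoto similarity on toy data; the SMILES, labels, and k value are illustrative assumptions only.

from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def fingerprint(smiles):
    """Morgan (ECFP4-like) bit fingerprint for one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def read_across(query_smiles, training_data, k=3):
    """Similarity-weighted hazard score from the k most similar neighbors."""
    query_fp = fingerprint(query_smiles)
    scored = []
    for smiles, label in training_data:
        sim = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smiles))
        scored.append((sim, label))
    neighbors = sorted(scored, reverse=True)[:k]
    positive_weight = sum(sim for sim, label in neighbors if label == 1)
    total_weight = sum(sim for sim, _ in neighbors) or 1.0
    return positive_weight / total_weight  # probability-like hazard score

# Hypothetical toy data: (SMILES, hazard label) pairs with made-up labels.
training_data = [
    ("CCO", 0),             # ethanol
    ("c1ccccc1", 1),        # benzene
    ("CC(=O)O", 0),         # acetic acid
    ("c1ccc2ccccc2c1", 1),  # naphthalene
]
print(read_across("Cc1ccccc1", training_data))  # toluene as the query

Full-scale systems layer much more on top of this (many endpoints, curated training sets, calibrated probabilities), but the neighbor-based prediction step is the part this sketch illustrates.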

20.
ALTEX ; 40(4): 559-570, 2023.
Article in English | MEDLINE | ID: mdl-37889187

ABSTRACT

Toxicology has undergone a transformation from an observational science to a data-rich discipline ripe for artificial intelligence (AI) integration. The exponential growth in computing power coupled with accumulation of large toxicological datasets has created new opportunities to apply techniques like machine learning and especially deep learning to enhance chemical hazard assessment. This article provides an overview of key developments in AI-enabled toxicology, including early expert systems, statistical learning methods like quantitative structure-activity relationships (QSARs), recent advances with deep neural networks, and emerging trends. The promises and challenges of AI adoption for predictive toxicology, data analysis, risk assessment, and mechanistic research are discussed. Responsible development and application of interpretable and human-centered AI tools through multidisciplinary collaboration can accelerate evidence-based toxicology to better protect human health and the environment. However, AI is not a panacea and must be thoughtfully designed and utilized alongside ongoing efforts to improve primary evidence generation and appraisal.


Subject(s)
Animal Testing Alternatives, Artificial Intelligence, Humans, Animals, Machine Learning