Results 1 - 20 of 139
1.
ALTEX ; 41(2): 179-201, 2024.
Article in English | MEDLINE | ID: mdl-38629803

ABSTRACT

When The Principles of Humane Experimental Technique was published in 1959, authors William Russell and Rex Burch had a modest goal: to make researchers think about what they were doing in the laboratory - and to do it more humanely. Sixty years later, their groundbreaking book was celebrated for inspiring a revolution in science and launching a new field: the 3Rs of alternatives to animal experimentation. On November 22, 2019, some pioneering and leading scientists and researchers in the field gathered at the Johns Hopkins Bloomberg School of Public Health in Baltimore for the 60 Years of the 3Rs Symposium: Lessons Learned and the Road Ahead. The event was sponsored by the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), the Foundation for Chemistry Research and Initiatives, the Alternative Research & Development Foundation (ARDF), the American Cleaning Institute (ACI), the International Fragrance Association (IFRA), the Institute for In Vitro Sciences (IIVS), John "Jack" R. Fowle III, and the Society of Toxicology (SoT). Fourteen presentations shared the history behind the groundbreaking publication, international efforts to achieve its aims, stumbling blocks to progress, as well as remarkable achievements. The day was a tribute to Russell and Burch, and a testament to what is possible when people from many walks of life - science, government, and industry - work toward a common goal.


William Russell and Rex Burch published their book The Principles of Humane Experimental Technique in 1959. The book encouraged researchers to replace animal experiments where possible, to refine experiments with animals in order to reduce their suffering, and to reduce the number of animals used in experiments to the minimum. Sixty years later, a group of pioneering and leading scientists and researchers in the field gathered to share how the publication came about and how the vision inspired international collaborations and successes on many different levels, including new laws. The paper includes an overview of important milestones in the history of alternatives to animal experimentation.


Subject(s)
Animal Experimentation, Animal Testing Alternatives, Animals, Animal Testing Alternatives/methods, Animal Welfare, Research Design
2.
Regul Toxicol Pharmacol ; 149: 105614, 2024 May.
Article in English | MEDLINE | ID: mdl-38574841

ABSTRACT

The United States Environmental Protection Agency (USEPA) uses the lethal dose 50% (LD50) value from in vivo rat acute oral toxicity studies for pesticide product label precautionary statements and environmental risk assessment (RA). The Collaborative Acute Toxicity Modeling Suite (CATMoS) is a quantitative structure-activity relationship (QSAR)-based in silico approach to predict rat acute oral toxicity that has the potential to reduce animal use when registering a new pesticide technical grade active ingredient (TGAI). This analysis compared LD50 values predicted by CATMoS to empirical values from in vivo studies for the TGAIs of 177 conventional pesticides. The accuracy and reliability of the model predictions were assessed relative to the empirical data in terms of USEPA acute oral toxicity categories and discrete LD50 values for each chemical. CATMoS was most reliable at placing pesticide TGAIs in acute toxicity categories III (>500-5000 mg/kg) and IV (>5000 mg/kg), with 88% categorical concordance for 165 chemicals with empirical in vivo LD50 values ≥ 500 mg/kg. When considering an LD50 for RA, CATMoS predictions of 2000 mg/kg and higher were found to agree with empirical values from limit tests (i.e., single, high-dose tests) or definitive results over 2000 mg/kg with few exceptions.
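The categorical comparison described above can be sketched in a few lines: map each LD50 to an EPA acute oral toxicity category, then count how often predicted and empirical values land in the same category. The cut-offs for categories III and IV are stated in the abstract; the bounds for categories I (≤50 mg/kg) and II (>50-500 mg/kg) follow the standard EPA scheme, and the example values are hypothetical, not the study data.

```python
# Sketch: EPA acute oral toxicity categories from LD50 values (mg/kg),
# and categorical concordance between predicted and empirical values.
# Category I/II bounds follow the standard EPA scheme; the abstract
# itself only states the bounds for categories III and IV.

def epa_acute_oral_category(ld50_mg_per_kg: float) -> str:
    """Map a rat acute oral LD50 (mg/kg) to an EPA toxicity category."""
    if ld50_mg_per_kg <= 50:
        return "I"
    elif ld50_mg_per_kg <= 500:
        return "II"
    elif ld50_mg_per_kg <= 5000:
        return "III"
    return "IV"

def categorical_concordance(pairs):
    """Fraction of (predicted, empirical) LD50 pairs in the same category."""
    matches = sum(
        epa_acute_oral_category(pred) == epa_acute_oral_category(obs)
        for pred, obs in pairs
    )
    return matches / len(pairs)

# Hypothetical predicted/empirical LD50 pairs, not the study data:
pairs = [(2300.0, 1800.0), (6100.0, 5500.0), (400.0, 700.0)]
print(epa_acute_oral_category(2300.0))  # -> III
print(round(categorical_concordance(pairs), 2))  # 2 of 3 pairs agree
```

A real comparison would also handle censored results ("limit tests") reported as, e.g., LD50 > 2000 mg/kg, which the study treats separately.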


Subject(s)
Computer Simulation, Pesticides, Quantitative Structure-Activity Relationship, Acute Toxicity Tests, United States Environmental Protection Agency, Animals, Risk Assessment, Pesticides/toxicity, Lethal Dose 50, Rats, Oral Administration, Acute Toxicity Tests/methods, United States, Reproducibility of Results
3.
Arch Toxicol ; 98(5): 1253-1269, 2024 May.
Article in English | MEDLINE | ID: mdl-38483583

ABSTRACT

Since the 1940s, patch tests in healthy volunteers (Human Predictive Patch Tests, HPPTs) have been used to identify chemicals that cause skin sensitization in humans. Recently, we reported the results of a major curation effort to support the development of OECD Guideline 497 on Defined Approaches (DAs) for skin sensitization (OECD in Guideline No. 497: Defined Approaches on Skin Sensitisation, 2021a. https://doi.org/10.1787/b92879a4-en ). In the course of this work, we compiled and published a database of 2277 HPPT results for 1366 unique test substances (Strickland et al. in Arch Toxicol 97:2825-2837, 2023. https://doi.org/10.1007/s00204-023-03530-3 ). Here we report a detailed analysis of the value of HPPT data for classification of chemicals as skin sensitizers under the United Nations' Globally Harmonized System of Classification and Labelling of Chemicals (GHS). As a result, we propose the dose per skin area (DSA) used for classification by the GHS to be replaced by or complemented with a dose descriptor that may better reflect sensitization incidence [e.g., the DSA causing induction of sensitization in one individual (DSA1+) or the DSA leading to an incidence of induction in 5% of the tested individuals (DSA05)]. We also propose standardized concepts and workflows for assessing individual HPPT results, for integrating multiple HPPT results and for using them in concert with Local Lymph Node Assay (LLNA) data in a weight of evidence (WoE) assessment. Overall, our findings show that HPPT results are often not sufficient for deriving unambiguous classifications on their own. However, where they are, the resulting classifications are reliable and reproducible and can be integrated well with those from other skin sensitization data, such as the LLNA.
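As a rough illustration of the proposed dose descriptors, the sketch below computes a dose per skin area (DSA) and a simple DSA1+ lookup, i.e., the lowest DSA at which at least one tested individual was sensitized. The records and numbers are hypothetical; the paper derives these metrics from the curated HPPT database.

```python
# Sketch of the dose metrics discussed above: the dose per skin area
# (DSA, ug/cm2) of an HPPT application, and a naive DSA1+ lookup.
# All records below are hypothetical examples, not HPPT data.

def dose_per_skin_area(dose_ug: float, patch_area_cm2: float) -> float:
    """DSA in ug/cm2 for a single patch-test application."""
    return dose_ug / patch_area_cm2

def dsa_one_plus(tests):
    """Lowest DSA (ug/cm2) with >= 1 sensitized subject, or None."""
    positives = [dsa for dsa, n_sensitized in tests if n_sensitized >= 1]
    return min(positives) if positives else None

# (DSA in ug/cm2, number of sensitized individuals) per hypothetical HPPT:
tests = [(250.0, 0), (500.0, 1), (1000.0, 4)]
print(dose_per_skin_area(1180.0, 4.9))  # dose of 1180 ug on a 4.9 cm2 patch
print(dsa_one_plus(tests))  # -> 500.0
```

A DSA05 (5% incidence) would instead require modeling incidence across the tested panel sizes, which is beyond this sketch.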


Subject(s)
Allergic Contact Dermatitis, Humans, Patch Tests, Allergic Contact Dermatitis/etiology, Allergens/toxicity, Skin, Local Lymph Node Assay
4.
Front Toxicol ; 6: 1321857, 2024.
Article in English | MEDLINE | ID: mdl-38482198

ABSTRACT

Introduction: Skin sensitization, which leads to allergic contact dermatitis, is a key toxicological endpoint with high occupational and consumer prevalence. This study optimized several in vitro assays listed in OECD skin sensitization test guidelines for use on a quantitative high-throughput screening (qHTS) platform and performed in silico model predictions to assess the skin sensitization potential of prioritized compounds from the Tox21 10K compound library. Methods: First, we screened the entire Tox21 10K compound library using a qHTS KeratinoSensTM (KS) assay and built a quantitative structure-activity relationship (QSAR) model based on the KS results. From the qHTS KS screening results, we prioritized 288 compounds to cover a wide range of structural chemotypes and tested them in the solid phase extraction-tandem mass spectrometry (SPE-MS/MS) direct peptide reactivity assay (DPRA), IL-8 homogeneous time-resolved fluorescence (HTRF) assay, CD86 and CD54 surface expression in THP1 cells, and predicted in silico sensitization potential using the OECD QSAR Toolbox (v4.5). Results: Interpreting tiered qHTS datasets using a defined approach showed the effectiveness and efficiency of in vitro methods. We selected structural chemotypes to present this diverse chemical collection and to explore previously unidentified structural contributions to sensitization potential. Discussion: Here, we provide a skin sensitization dataset of unprecedented size, along with associated tools, and analysis designed to support chemical assessments.

5.
J Cheminform ; 16(1): 19, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378618

ABSTRACT

The rapid increase of publicly available chemical structures and associated experimental data presents a valuable opportunity to build robust QSAR models for applications in different fields. However, the common concern is the quality of both the chemical structure information and associated experimental data. This is especially true when those data are collected from multiple sources, as chemical substance mappings can contain many duplicate structures and molecular inconsistencies. Such issues can impact the resulting molecular descriptors and their mappings to experimental data and, subsequently, the quality of the derived models in terms of accuracy, repeatability, and reliability. Herein we describe the development of an automated workflow to standardize chemical structures according to a set of standard rules and generate two- and/or three-dimensional "QSAR-ready" forms prior to the calculation of molecular descriptors. The workflow was designed in the KNIME workflow environment and consists of three high-level steps. First, a structure encoding is read, and then the resulting in-memory representation is cross-referenced with any existing identifiers for consistency. Finally, the structure is standardized using a series of operations including desalting, stripping of stereochemistry (for two-dimensional structures), standardization of tautomers and nitro groups, valence correction, neutralization when possible, and then removal of duplicates. This workflow was initially developed to support collaborative QSAR modeling projects to ensure consistency of the results from the different participants. It was then updated and generalized for other modeling applications. This included modification of the "QSAR-ready" workflow to generate "MS-ready structures" to support the generation of substance mappings and searches for software applications related to non-targeted analysis mass spectrometry. Both QSAR-ready and MS-ready workflows are freely available in KNIME, via standalone versions on GitHub, and as Docker container resources for the scientific community. Scientific contribution: This work pioneers an automated workflow in KNIME, systematically standardizing chemical structures to ensure their readiness for QSAR modeling and broader scientific applications. By addressing data quality concerns through desalting, stereochemistry stripping, and normalization, it optimizes the accuracy and reliability of molecular descriptors. The freely available resources in KNIME, on GitHub, and as Docker containers democratize access, benefiting collaborative research and advancing diverse modeling endeavors in chemistry and mass spectrometry.
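Two of the standardization steps listed above, desalting and duplicate removal, can be caricatured in plain Python on SMILES strings. This is only a sketch of the control flow: the published workflow runs in KNIME with a full cheminformatics toolkit, and picking the largest dot-separated fragment by string length is a crude stand-in for real desalting.

```python
# Illustrative sketch of two "QSAR-ready" standardization steps:
# desalting (keep the largest covalent fragment) and duplicate removal.
# Not chemically rigorous: fragment size is approximated by string
# length rather than atom count, and no valence/tautomer handling is done.

def desalt(smiles: str) -> str:
    """Keep the largest dot-separated fragment of a SMILES string."""
    fragments = smiles.split(".")
    return max(fragments, key=len)

def standardize_all(smiles_list):
    """Desalt each structure, then drop exact duplicates, keeping order."""
    seen = set()
    out = []
    for smi in smiles_list:
        std = desalt(smi)
        if std not in seen:
            seen.add(std)
            out.append(std)
    return out

# Ethanol, its (hypothetical) HCl salt, benzoic acid, and a duplicate:
raw = ["CCO", "CCO.Cl", "c1ccccc1C(=O)O", "CCO"]
print(standardize_all(raw))  # -> ['CCO', 'c1ccccc1C(=O)O']
```

In practice this logic would sit on top of a toolkit such as RDKit, which provides proper fragment, tautomer, and charge handling.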

6.
Environ Health Perspect ; 132(2): 27006, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38349723

ABSTRACT

BACKGROUND: Extraction of toxicological end points from primary sources is a central component of systematic reviews and human health risk assessments. To ensure optimal use of these data, consistent language should be used for end point descriptions. However, primary source language describing treatment-related end points can vary greatly, resulting in large labor efforts to manually standardize extractions before data are fit for use. OBJECTIVES: To minimize these labor efforts, we applied an augmented intelligence approach and developed automated tools to support standardization of extracted information via application of preexisting controlled vocabularies. METHODS: We created and applied a harmonized controlled vocabulary crosswalk, consisting of Unified Medical Language System (UMLS) codes, German Federal Institute for Risk Assessment (BfR) DevTox harmonized terms, and Organisation for Economic Co-operation and Development (OECD) end point vocabularies, to roughly 34,000 extractions from prenatal developmental toxicology studies conducted by the National Toxicology Program (NTP) and 6,400 extractions from European Chemicals Agency (ECHA) prenatal developmental toxicology studies, all recorded based on the original study report language. RESULTS: We automatically applied standardized controlled vocabulary terms to 75% of the NTP extracted end points and 57% of the ECHA extracted end points. Of all the standardized extracted end points, about half (51%) required manual review for potential extraneous matches or inaccuracies. Extracted end points that were not mapped to standardized terms tended to be too general or required human logic to find a good match. We estimate that this augmented intelligence approach saved >350 hours of manual effort and yielded valuable resources including a controlled vocabulary crosswalk, organized related-terms lists, code for implementing an automated mapping workflow, and a computationally accessible dataset. DISCUSSION: Augmenting manual efforts with automation tools increased the efficiency of producing a findable, accessible, interoperable, and reusable (FAIR) dataset of regulatory guideline studies. This open-source approach can be readily applied to other legacy developmental toxicology datasets, and the code design is customizable for other study types. https://doi.org/10.1289/EHP13215.
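The mapping step can be pictured as a crosswalk lookup with a manual-review queue for unmatched terms. The crosswalk entries below are hypothetical placeholders, not the UMLS/DevTox/OECD mappings used in the study.

```python
# Sketch of the augmented-intelligence idea from the abstract: map
# free-text end point descriptions to controlled vocabulary terms via a
# crosswalk dictionary, and flag anything unmapped for manual review.
# The crosswalk entries are hypothetical, not the study's mappings.

CROSSWALK = {
    "decreased fetal weight": "Fetal Growth Retardation",
    "fetal body weight reduced": "Fetal Growth Retardation",
    "cleft palate": "Cleft Palate",
}

def standardize_endpoints(extracted_terms):
    """Return (mapped, needs_review) lists for extracted end point text."""
    mapped, needs_review = [], []
    for term in extracted_terms:
        key = term.strip().lower()
        if key in CROSSWALK:
            mapped.append((term, CROSSWALK[key]))
        else:
            # Too general, or requires human logic to find a good match
            needs_review.append(term)
    return mapped, needs_review

terms = ["Decreased fetal weight", "Cleft palate", "general developmental delay"]
mapped, review = standardize_endpoints(terms)
print(mapped)
print(review)  # -> ['general developmental delay']
```

The study's workflow adds a second queue, flagging even auto-mapped terms for review when the match may be extraneous or inaccurate.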


Subject(s)
Household Articles, Controlled Vocabulary, Humans, Female, Pregnancy, Systematic Reviews as Topic, Intelligence, Research Design
7.
Arch Toxicol ; 98(3): 735-754, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244040

ABSTRACT

The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure to a data-rich field ripe for AI integration. The volume, variety, and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies, and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured, a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure the development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher in new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.


Subject(s)
Artificial Intelligence, Chemical Safety, Humans, Neural Networks (Computer), Machine Learning, Catalysis
8.
BMC Bioinformatics ; 25(1): 4, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166637

ABSTRACT

BACKGROUND: Chemically induced skin sensitization, or allergic contact dermatitis, is a common occupational and public health issue. Regulatory authorities require an assessment of potential to cause skin sensitization for many chemical products. Defined approaches for skin sensitization (DASS) identify potential chemical skin sensitizers by integrating data from multiple non-animal tests based on human cells, molecular targets, and computational model predictions using standardized data interpretation procedures. While several DASS are internationally accepted by regulatory agencies, the data interpretation procedures vary in logical complexity, and manual application can be time-consuming or prone to error. RESULTS: We developed the DASS App, an open-source web application, to facilitate user application of three regulatory testing strategies for skin sensitization assessment: the Two-out-of-Three (2o3), the Integrated Testing Strategy (ITS), and the Key Event 3/1 Sequential Testing Strategy (KE 3/1 STS) without the need for software downloads or computational expertise. The application supports upload and analysis of user-provided data, includes steps to identify inconsistencies and formatting issues, and provides predictions in a downloadable format. CONCLUSION: This open-access web-based implementation of internationally harmonized regulatory guidelines for an important public health endpoint is designed to support broad user uptake and consistent, reproducible application. The DASS App is freely accessible via https://ntp.niehs.nih.gov/go/952311 and all scripts are available on GitHub ( https://github.com/NIEHS/DASS ).
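As an illustration, the Two-out-of-Three (2o3) data interpretation procedure named above reduces, in its simplest form, to a majority vote over three assay calls (e.g., DPRA, KeratinoSens, h-CLAT). This sketch ignores the guideline's handling of missing or borderline results, which the DASS App implements in full.

```python
# Minimal sketch of the 2o3 defined approach for skin sensitization:
# classify by the majority call among three non-animal assays.
# Simplified: the actual OECD procedure also covers cases where only
# two (concordant) results are available, and borderline outcomes.

def two_out_of_three(dpra: bool, keratinosens: bool, hclat: bool) -> str:
    """Majority vote across three assay calls (True = positive)."""
    positives = sum([dpra, keratinosens, hclat])
    if positives >= 2:
        return "sensitizer"
    return "not a sensitizer"

print(two_out_of_three(True, True, False))   # -> sensitizer
print(two_out_of_three(False, True, False))  # -> not a sensitizer
```

The other two strategies the app supports (ITS and KE 3/1 STS) are sequential and score-based rather than simple votes, which is part of why a validated tool is preferable to ad hoc spreadsheets.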


Subject(s)
Allergic Contact Dermatitis, Mobile Applications, Animals, Humans, Animal Testing Alternatives/methods, Skin, Allergic Contact Dermatitis/etiology
9.
Regul Toxicol Pharmacol ; 147: 105564, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38182013

ABSTRACT

In toxicology and regulatory testing, the use of animal methods has been both a cornerstone and a subject of intense debate. To continue this discourse, a panel and audience representing scientists from various sectors and countries convened at a workshop held during the 12th World Congress on Alternatives and Animal Use in the Life Sciences (WC-12). The ensuing discussion focused on the scientific and ethical considerations surrounding the necessity and responsibility of defending the creation of new animal data in regulatory testing. The primary aim was to foster an open dialogue between the panel members and the audience while encouraging diverse perspectives on the responsibilities and obligations of various stakeholders (including industry, regulatory bodies, technology developers, research scientists, and animal welfare NGOs) in defending the development and subsequent utilization of new animal data. This workshop summary report captures the key elements from this critical dialogue and collective introspection. It describes the intersection of scientific progress and ethical responsibility as all sectors seek to accelerate the pace of 21st century predictive toxicology and new approach methodologies (NAMs) for the protection of human health and the environment.


Subject(s)
Animal Welfare, Research Report, Animals, Humans, Industry, Risk Assessment, Animal Testing Alternatives/methods
10.
Cutan Ocul Toxicol ; 43(1): 58-68, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37905558

ABSTRACT

Many sectors have seen complete replacement of the in vivo rabbit eye test with reproducible and relevant in vitro and ex vivo methods to assess the eye corrosion/irritation potential of chemicals. However, the in vivo rabbit eye test remains the standard test used for agrochemical formulations in some countries. Therefore, two defined approaches (DAs) for assessing conventional agrochemical formulations were developed, using the EpiOcularTM Eye Irritation Test (EIT) [Organisation for Economic Co-operation and Development (OECD) test guideline (TG) 492] and the Bovine Corneal Opacity and Permeability (OECD TG 437; BCOP) test with histopathology. Presented here are the results from testing 29 agrochemical formulations, which were evaluated against the United States Environmental Protection Agency's (EPA) pesticide classification system and assessed using orthogonal validation rather than direct concordance analysis with the historical in vivo rabbit eye data. Scientific confidence was established by evaluating the methods and testing results using an established framework that considers fitness for purpose, human biological relevance, technical characterisation, data integrity and transparency, and independent review. The in vitro and ex vivo methods used in the DAs were demonstrated to be at least as fit for purpose, reliable, and relevant as the in vivo rabbit eye test. Overall, there is high scientific confidence in the use of these DAs for assessing the eye corrosion/irritation potential of agrochemical formulations.


Subject(s)
Corneal Opacity, Corneal Epithelium, Humans, Animals, Cattle, Rabbits, Eye, Corneal Epithelium/pathology, Agrochemicals/toxicity, Irritants/toxicity, Corneal Opacity/chemically induced, Corneal Opacity/pathology, Permeability, Animal Testing Alternatives
11.
Regul Toxicol Pharmacol ; 144: 105493, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37717614

ABSTRACT

Like many other consumer and occupational products, pesticide formulations may contain active ingredients or co-formulants which have the potential to cause skin sensitisation. Currently, there is little evidence they do, but that could just reflect lack of clinical investigation. Consequently, it is necessary to carry out a safety evaluation process, quantifying risks so that they can be properly managed. A workshop on this topic in 2022 discussed how best to undertake quantitative risk assessment (QRA) for pesticide products, including learning from the experience of industries, notably cosmetics, that already undertake such a process routinely. It also addressed ways to remedy the matter of clinical investigation, even if only to demonstrate the absence of a problem. Workshop participants concluded that QRA for skin sensitisers in pesticide formulations was possible, but required careful justification of any safety factors applied, as well as improvements to the estimation of skin exposure. The need for regulations to stay abreast of the science was also noted. Ultimately, the success of any risk assessment/management for skin sensitisers must be judged by the clinical picture. Accordingly, the workshop participants encouraged the development of more active skin health monitoring amongst groups most exposed to the products.


Subject(s)
Cosmetics, Allergic Contact Dermatitis, Pesticides, Humans, Allergic Contact Dermatitis/etiology, Pesticides/toxicity, Skin, Risk Assessment, Cosmetics/toxicity
12.
J Infect Dis ; 228(Suppl 5): S337-S354, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37669225

ABSTRACT

The National Center for Advancing Translational Sciences (NCATS) Assay Guidance Manual (AGM) Workshop on 3D Tissue Models for Antiviral Drug Development, held virtually on 7-8 June 2022, provided comprehensive coverage of critical concepts intended to help scientists establish robust, reproducible, and scalable 3D tissue models to study viruses with pandemic potential. This workshop was organized by NCATS, the National Institute of Allergy and Infectious Diseases, and the Bill and Melinda Gates Foundation. During the workshop, scientific experts from academia, industry, and government provided an overview of 3D tissue models' utility and limitations, use of existing 3D tissue models for antiviral drug development, practical advice, best practices, and case studies about the application of available 3D tissue models to infectious disease modeling. This report includes a summary of each workshop session as well as a discussion of perspectives and challenges related to the use of 3D tissues in antiviral drug discovery.


Subject(s)
Antiviral Agents, Drug Discovery, Antiviral Agents/pharmacology, Antiviral Agents/therapeutic use, Biological Assay
13.
Arch Toxicol ; 97(11): 2825-2837, 2023 11.
Article in English | MEDLINE | ID: mdl-37615678

ABSTRACT

Critical to the evaluation of non-animal tests are reference data with which to assess their relevance. Animal data are typically used because they are generally standardized and available. However, when regulatory agencies aim to protect human health, human reference data provide the benefit of not having to account for possible interspecies variability. To support the evaluation of non-animal approaches for skin sensitization assessment, we collected data from 2277 human predictive patch tests (HPPTs), i.e., human repeat insult patch tests and human maximization tests, for skin sensitization from 1555 publications. We recorded protocol elements and positive or negative outcomes, developed a scoring system to evaluate each test for reliability, and calculated traditional and non-traditional dose metrics. We also traced each test result back to its original report to remove duplicates. The resulting database, which contains information for 1366 unique substances, was characterized for physicochemical properties, chemical structure categories, and protein binding mechanisms. This database is publicly available on the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods website and in the Integrated Chemical Environment to serve as a resource for additional evaluation of alternative methods and development of new approach methodologies for skin sensitization assessments.


Subject(s)
Benchmarking, Skin, Humans, Patch Tests, Reproducibility of Results, Factual Databases
14.
Crit Rev Toxicol ; 53(7): 385-411, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37646804

ABSTRACT

Chemical regulatory authorities around the world require systemic toxicity data from acute exposures via the oral, dermal, and inhalation routes for human health risk assessment. To identify opportunities for regulatory uses of non-animal replacements for these tests, we reviewed acute systemic toxicity testing requirements for jurisdictions that participate in the International Cooperation on Alternative Test Methods (ICATM): Brazil, Canada, China, the European Union, Japan, South Korea, Taiwan, and the USA. The chemical sectors included in our review of each jurisdiction were cosmetics, consumer products, industrial chemicals, pharmaceuticals, medical devices, and pesticides. We found acute systemic toxicity data were most often required for hazard assessment, classification, and labeling, and to a lesser extent quantitative risk assessment. Where animal methods were required, animal reduction methods were typically recommended. For many jurisdictions and chemical sectors, non-animal alternatives are not accepted, but several jurisdictions provide guidance to support the use of test waivers to reduce animal use for specific applications. An understanding of international regulatory requirements for acute systemic toxicity testing will inform ICATM's strategy for the development, acceptance, and implementation of non-animal alternatives to assess the health hazards and risks associated with acute toxicity.

15.
Environ Int ; 178: 108082, 2023 08.
Article in English | MEDLINE | ID: mdl-37422975

ABSTRACT

The predominantly animal-centric approach of chemical safety assessment has increasingly come under pressure. Society is questioning the overall performance, sustainability, continued relevance for human health risk assessment, and ethics of this system, demanding a change of paradigm. At the same time, the scientific toolbox used for risk assessment is continuously enriched by the development of "New Approach Methodologies" (NAMs). While this term does not define the age or the state of readiness of the innovation, it covers a wide range of methods, including quantitative structure-activity relationship (QSAR) predictions, high-throughput screening (HTS) bioassays, omics applications, cell cultures, organoids, microphysiological systems (MPS), machine learning models, and artificial intelligence (AI). In addition to promising faster and more efficient toxicity testing, NAMs have the potential to fundamentally transform today's regulatory work by allowing more human-relevant decision-making in terms of both hazard and exposure assessment. Yet, several obstacles hamper a broader application of NAMs in current regulatory risk assessment. Constraints in addressing repeated-dose toxicity, with particular reference to chronic toxicity, and hesitance from relevant stakeholders are major challenges for the implementation of NAMs in a broader context. Moreover, issues regarding predictivity, reproducibility, and quantification need to be addressed, and regulatory and legislative frameworks need to be adapted to NAMs. The conceptual perspective presented here has its focus on hazard assessment and is grounded in the main findings and conclusions from a symposium and workshop held in Berlin in November 2021. It intends to provide further insights into how NAMs can be gradually integrated into chemical risk assessment aimed at protection of human health, until eventually the current paradigm is replaced by an animal-free "Next Generation Risk Assessment" (NGRA).


Subject(s)
Artificial Intelligence, Toxicity Tests, Humans, Reproducibility of Results, Toxicity Tests/methods, Risk Assessment/methods
16.
Toxicol In Vitro ; 91: 105630, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37315744

ABSTRACT

Skin permeation is a primary consideration in the safety assessment of cosmetic ingredients, topical drugs, and human users handling veterinary medicinal products. While excised human skin (EHS) remains the 'gold standard' for in vitro permeation testing (IVPT) studies, unreliable supply and high cost motivate the search for alternative skin barrier models. In this study, a standardized dermal absorption testing protocol was developed to evaluate the suitability of alternative skin barrier models to predict skin absorption in humans. Under this protocol, side-by-side assessments of a commercially available reconstructed human epidermis (RhE) model (EpiDerm-200-X, MatTek), a synthetic barrier membrane (Strat-M, Sigma-Aldrich), and EHS were performed. The skin barrier models were mounted on Franz diffusion cells and the permeation of caffeine, salicylic acid, and testosterone was quantified. Transepidermal water loss (TEWL) and histology of the biological models were also compared. EpiDerm-200-X exhibited native human epidermis-like morphology, including a characteristic stratum corneum, but had an elevated TEWL as compared to EHS. The mean 6 h cumulative permeation of a finite dose (6 nmol/cm2) of caffeine and testosterone was highest in EpiDerm-200-X, followed by EHS and Strat-M. Salicylic acid permeated most in EHS, followed by EpiDerm-200-X and Strat-M. Overall, evaluating novel alternative skin barrier models in the manner outlined herein has the potential to reduce the time from basic science discovery to regulatory impact.
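For context, cumulative permeation per unit area in a Franz-cell experiment is typically tallied by correcting each receptor-fluid sample for the amounts withdrawn at earlier time points. The sketch below assumes fixed receptor and sampling volumes with replacement after each draw; all numbers are hypothetical, not data from this study.

```python
# Sketch of how cumulative permeation per unit area is tallied in a
# Franz-cell (IVPT) experiment: at each sampling time the receptor
# concentration is measured, the sampled volume is replaced, and the
# amounts removed in earlier samples are added back. Hypothetical values.

def cumulative_permeation(concs_nmol_per_ml, receptor_ml, sample_ml, area_cm2):
    """Cumulative amount permeated (nmol/cm2) at each sampling point."""
    cumulative = []
    removed = 0.0  # total nmol withdrawn in previous samples
    for c in concs_nmol_per_ml:
        amount = c * receptor_ml + removed
        cumulative.append(amount / area_cm2)
        removed += c * sample_ml
    return cumulative

# Hypothetical receptor concentrations at 1, 2, 4, and 6 h:
q = cumulative_permeation([0.05, 0.12, 0.30, 0.45],
                          receptor_ml=5.0, sample_ml=0.5, area_cm2=0.64)
print([round(x, 2) for x in q])
```

Comparing such cumulative-permeation profiles across EpiDerm-200-X, Strat-M, and EHS is essentially what the study's ranking of the three barriers rests on.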


Subject(s)
Caffeine, Skin Absorption, Humans, Skin/metabolism, Epidermis/metabolism, Salicylic Acid/metabolism, Testosterone/metabolism, Water/metabolism
18.
Nucleic Acids Res ; 51(W1): W78-W82, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37194699

ABSTRACT

Access to computationally based visualization tools to navigate chemical space has become more important due to the increasing size and diversity of publicly accessible databases, associated compendiums of high-throughput screening (HTS) results, and other descriptor and effects data. However, application of these techniques requires advanced programming skills that are beyond the capabilities of many stakeholders. Here we report the development of the second version of the ChemMaps.com webserver (https://sandbox.ntp.niehs.nih.gov/chemmaps/) focused on environmental chemical space. The chemical space of ChemMaps.com v2.0, released in 2022, now includes approximately one million environmental chemicals from the EPA Distributed Structure-Searchable Toxicity (DSSTox) inventory. ChemMaps.com v2.0 incorporates mapping of HTS assay data from the U.S. federal Tox21 research collaboration program, which includes results from around 2,000 assays tested on up to 10,000 chemicals. As a case example, we showcase chemical space navigation for perfluorooctanoic acid (PFOA), part of the per- and polyfluoroalkyl substances (PFAS) chemical family, which are of significant concern for their potential effects on human health and the environment.


Subject(s)
Chemical Compound Databases, High-Throughput Screening Assays, Software, Environment
20.
Regul Toxicol Pharmacol ; 138: 105333, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36608925

ABSTRACT

Meaningful and accurate reference data are crucial for the validation of New Approach Methodologies (NAMs) in toxicology. For skin sensitization, multiple reference datasets are available including human patch test data, guinea pig data and data from the mouse local lymph node assay (LLNA). When assessed against the LLNA, a reduced sensitivity has been reported for in vitro and in chemico assays for lipophilic chemicals with a LogP ≥3.5, resulting in reliability restrictions within the h-CLAT OECD test guideline. Here we address the question of whether LLNA data are an appropriate reference for chemicals in this physicochemical range. Analysis of LLNA vs human reference data indicates that the false-discovery rate of the LLNA is significantly higher for chemicals with LogP ≥3.5. We present a mechanistic hypothesis whereby irritation caused by testing lipophilic chemicals at high test doses leads to unspecific cell proliferation. The accompanying analysis indicates that for lipophilic chemicals with negative calls in in vitro and in chemico assays, resorting to the LLNA is not necessarily a better option. These results indicate that the validation of NAMs in this particular LogP range should be based on a more holistic evaluation of the reference data and not solely upon LLNA data.


Subject(s)
Allergic Contact Dermatitis, Local Lymph Node Assay, Animals, Mice, Humans, Guinea Pigs, Allergic Contact Dermatitis/etiology, Allergic Contact Dermatitis/pathology, Reproducibility of Results, Skin, Patch Tests, Allergens/toxicity, Lymph Nodes/pathology