ABSTRACT
In our earlier work (Golden et al., 2021), we showed 70-80% accuracies for several skin sensitization computational tools using human data. Here, we expanded the data set using the NICEATM human skin sensitization database to create a final data set of 1355 discrete chemicals (largely negative, ~70%). Using this expanded data set, we analyzed model performance and evaluated mispredictions using Toxtree (v 3.1.0), OECD QSAR Toolbox (v 4.5), VEGA's (1.2.0 BETA) CAESAR (v 2.1.7), and a k-nearest-neighbor (kNN) classification approach. We show that accuracy on this data set was lower than previous estimates, with balanced accuracies of 63% and 65% for Toxtree and OECD QSAR Toolbox, respectively, 46% for VEGA, and 59% for the kNN approach; the lower accuracy is likely due to the higher percentage of nonsensitizing chemicals. Two hundred eighty-seven chemicals, approximately 20% of the entire data set, were mispredicted by both Toxtree and OECD QSAR Toolbox, and 84% of these were false positives. The absence or presence of metabolic simulation in OECD QSAR Toolbox made no overall difference. While Toxtree is known for overpredicting, 60% of the chemicals in the data set had no alert for skin sensitization, and a substantial number of these chemicals were in fact sensitizers, pointing to sensitization mechanisms not recognized by Toxtree. Interestingly, we observed that chemicals with more than one Toxtree alert were more likely to be nonsensitizers. Finally, the kNN approach tended to mispredict different chemicals than either OECD QSAR Toolbox or Toxtree, suggesting that additional information can be garnered from a kNN approach. Overall, the results demonstrate that while there is merit in structural alerts as well as QSAR or read-across approaches (perhaps even more so in their combination), further improvement will require a more nuanced understanding of the mechanisms of skin sensitization.
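Balanced accuracy, the headline metric here, averages sensitivity and specificity so that a data set dominated by nonsensitizers (~70% in this case) does not inflate the score. A minimal sketch of the calculation, using invented labels rather than the actual 1355-chemical data set:

```python
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

# Hypothetical labels: 1 = sensitizer, 0 = nonsensitizer (illustrative only)
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]  # e.g., calls from a structural-alert tool

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate

# Balanced accuracy is the mean of sensitivity and specificity
print((sensitivity + specificity) / 2)
print(balanced_accuracy_score(y_true, y_pred))  # same value via scikit-learn
```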
Subjects
Quantitative Structure-Activity Relationship, Skin, Humans, Skin/metabolism, Computer Simulation
ABSTRACT
Chemical respiratory sensitization is an immunological process that manifests clinically mostly as occupational asthma and is responsible for 1 in 6 cases of adult asthma, although this may be an underestimate of the prevalence, as the condition is underdiagnosed. Occupational asthma results in unemployment for roughly one-third of those affected due to severe health issues. Despite its high prevalence, chemical respiratory sensitization is difficult to predict, as there are currently no validated models and the mechanisms are not entirely understood, creating a significant challenge for regulatory bodies and industry alike. The Adverse Outcome Pathway (AOP) for respiratory sensitization is currently incomplete. However, some key events have been identified, and there is overlap with the comparatively well-characterized AOP for dermal sensitization. Because of this, and because dermal sensitization is often assessed by in vivo, in chemico, or in silico methods, regulatory bodies are defaulting to the dermal sensitization status of chemicals as a proxy for respiratory sensitization status when evaluating chemical safety. We identified a data set of known human respiratory sensitizers, which we used to investigate the accuracy of Toxtree, a structural alert model designed for skin sensitization, and the Centre for Occupational and Environmental Health (COEH) model, which was developed specifically for occupational asthma. While both models had a reasonable level of accuracy, the COEH model achieved the higher balanced accuracy at 76%; when the models agreed, the overall accuracy was 87%. There were important differences between the models: Toxtree had superior performance for some structural alerts and some categories of well-characterized skin sensitizers, while the COEH model had high accuracy in identifying sensitizers that lacked identified skin sensitization reactivity domains. Overall, both models achieved respectable accuracy. However, neither model addresses potency, which, along with data quality, remains a hurdle, and the field must prioritize these issues to move forward.
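"When the models agreed" refers to a conditional accuracy: the evaluation is restricted to chemicals on which both tools make the same call, and that consensus subset is scored. An illustrative sketch with invented predictions, not the study data:

```python
import numpy as np

# Hypothetical ground truth and predictions from two models (1 = sensitizer, 0 = not)
y_true  = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
model_a = np.array([1, 0, 0, 1, 1, 0, 0, 1, 1, 0])  # stand-in for Toxtree calls
model_b = np.array([1, 1, 0, 1, 1, 0, 0, 0, 1, 0])  # stand-in for COEH calls

agree = model_a == model_b  # chemicals where the two models concur
consensus_accuracy = (model_a[agree] == y_true[agree]).mean()
print(f"agreement on {agree.sum()} of {len(y_true)} chemicals, "
      f"consensus accuracy = {consensus_accuracy:.2f}")
```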
Subjects
Allergens/adverse effects, Computer Simulation, Respiratory Hypersensitivity/chemically induced, Allergens/chemistry, Humans, Logistic Models, Molecular Structure
ABSTRACT
Due to regulatory bans and voluntary substitutions, halogenated polybrominated diphenyl ether (PBDE) flame retardants (FR) are increasingly being replaced, mainly by organophosphorus FR (OPFR). Leveraging a 3D rat primary neural organotypic in vitro model (rat brainsphere), we compare the developmental neurotoxic effects of BDE-47, the most abundant PBDE congener, with four OPFR (isopropylated phenyl phosphate-IPP, triphenyl phosphate-TPHP, isodecyl diphenyl phosphate-IDDP, and tricresyl phosphate, also known as trimethyl phenyl phosphate-TMPP). Employing mass spectrometry-based metabolomics and transcriptomics, we observe stronger developmental neurotoxic effects of the OPFR at similar, human-relevant, non-cytotoxic concentrations (0.1-5 µM). This includes toxicity to neurons in the low µM range, and all FR except BDE-47 and TPHP decrease the neurotransmitters glutamate and GABA. Furthermore, N-acetyl aspartate (NAA), considered a neurologic diagnostic molecule, was decreased by all OPFR. At similar concentrations, the FR currently in use decreased plasma membrane dopamine active transporter expression, while BDE-47 did not. Several findings suggest astrogliosis induced by the OPFR, but not BDE-47. At the 5 µM concentration, the OPFR interfered with myelination more than BDE-47 did. An increase in cytokine gene and receptor expression suggests that exposure to OPFR may induce an inflammatory response. Pathway/category overrepresentation shows disruption in (1) transmission of action potentials, cell-cell signaling, synaptic transmission, and receptor signaling, (2) immune response, inflammation, and defense response, (3) cell cycle, and (4) lipid metabolism and transport. Taken together, this appears to be a case of regrettable substitution, with replacement substances that are no less developmentally neurotoxic in a primary rat 3D model.
Subjects
Brain/drug effects, Flame Retardants/toxicity, Neurons/drug effects, Neurotoxicity Syndromes/etiology, Organophosphates/toxicity, Animals, Brain/embryology, Brain/metabolism, Cultured Cells, Female, Gene Expression Profiling, Gestational Age, Halogenated Diphenyl Ethers/toxicity, Metabolome/drug effects, Metabolomics, Neurons/metabolism, Neurons/pathology, Neurotoxicity Syndromes/metabolism, Neurotoxicity Syndromes/pathology, Pregnancy, Sprague-Dawley Rats, Cellular Spheroids, Transcriptome/drug effects, Tritolyl Phosphates/toxicity
ABSTRACT
To date, most in vitro toxicity testing has focused on acute effects of compounds at high concentrations. This testing strategy does not reflect real-life exposures, which might contribute to long-term disease outcomes. We used a 3D human dopaminergic in vitro LUHMES cell line model to determine whether the effects of short-term rotenone exposure (100 nM, 24 h) are permanent or reversible. Decreases in complex I activity, ATP, mitochondrial diameter, and neurite outgrowth were observed acutely. After compound removal, complex I activity was still inhibited; however, ATP levels were increased, cells were electrically active, and aggregates restored neurite outgrowth integrity and mitochondrial morphology. We identified significant transcriptomic changes after 24 h which were no longer present 7 days after wash-out. Our results suggest that testing short-term exposures in vitro may capture many acute effects which cells can overcome, missing adaptive processes and long-term mechanisms. In addition, to study cellular resilience, cells were re-exposed to rotenone after the wash-out and recovery period. Pre-exposed cells maintained higher metabolic activity than controls and presented a different expression pattern in genes previously shown to be altered by rotenone. NFE2L2, ATF4, and EAAC1 were downregulated upon a single hit on day 14 but unchanged in pre-exposed aggregates. DAT and CASP3 were only altered after re-exposure to rotenone, while TYMS and MLF1IP were downregulated in both single-exposed and pre-exposed aggregates. In summary, our study shows that a human cell-based 3D model can be used to assess cellular adaptation, resilience, and long-term mechanisms relevant to neurodegenerative research.
Subjects
Cell Culture Techniques/methods, Dopaminergic Neurons/drug effects, Gene Expression Regulation/drug effects, Rotenone/toxicity, Toxicity Tests/methods, Adenosine Triphosphate/metabolism, Dopaminergic Neurons/physiology, Humans, Insecticides/toxicity, Mitochondria/drug effects, Mitochondria/metabolism, Neuronal Outgrowth/drug effects
ABSTRACT
In the context of the Human Toxome project, mass spectrometry-based metabolomics characterization of estrogen-stimulated MCF-7 cells was studied in order to support the untargeted deduction of pathways of toxicity. Targeted and untargeted approaches using overrepresentation analysis (ORA), quantitative enrichment analysis (QEA), and pathway analysis (PA) were compared with a metabolite network approach. Any untargeted approach necessarily has some noise in the data owing to artifacts, outliers, and misidentified metabolites. Depending on the chemical analytical choices (sample extraction, chromatography, instrument and settings, etc.), only a partial representation of all metabolites will be achieved, biased by both the analytical methods and the database used to identify the metabolites. Here, we show on the one hand that using a data analysis approach based exclusively on pathway annotations has the potential to miss much that is of interest and, in the case of misidentified metabolites, can produce perturbed pathways that are statistically significant yet uninformative for the biological sample at hand. On the other hand, a targeted approach, by narrowing its focus and minimizing (but not eliminating) misidentifications, renders the likelihood of a spurious pathway much smaller, but the limited number of metabolites also makes statistical significance harder to achieve. To avoid an analysis dependent on pathways, we built a de novo network in the STITCH database using the 53 metabolites that differed at 24 h between cells grown with and without estrogen (p < 0.01); STITCH links metabolites based on known reactions in the main metabolic network pathways as well as on experimental evidence and text mining. The resulting network contained a "connected component" of 43 metabolites and helped identify non-endogenous metabolites as well as pathways not visible to annotation-based approaches. Moreover, the most highly connected metabolites (energy metabolites such as pyruvate and alpha-ketoglutarate, as well as amino acids) showed only a modest change between proliferation with and without estrogen. Here, we demonstrate that estrogen induces subtle but potentially phenotypically important alterations in the acyl-carnitine fatty acids, acetyl-putrescine, and succinoadenosine, in addition to likely subtle changes in key energy metabolites that, however, could not be verified consistently given the technical limitations of this approach. Finally, we show that a network-based approach combined with text mining identifies pathways that would otherwise neither be considered statistically significant on their own nor be identified via ORA, QEA, or PA.
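The "connected component" and "most highly connected metabolites" language corresponds to standard graph operations. A minimal sketch with a toy edge list; the real network was derived from STITCH associations, not these invented edges:

```python
import networkx as nx

# Toy metabolite-metabolite associations (invented; the study used STITCH links)
edges = [
    ("pyruvate", "alpha-ketoglutarate"),
    ("pyruvate", "alanine"),
    ("alpha-ketoglutarate", "glutamate"),
    ("glutamate", "glutamine"),
    ("acetyl-putrescine", "putrescine"),  # a small, separate component
]
G = nx.Graph()
G.add_edges_from(edges)

# Largest connected component, analogous to the 43-metabolite component reported above
largest_cc = max(nx.connected_components(G), key=len)
print("largest connected component:", sorted(largest_cc))

# Node degree as a simple proxy for the 'most highly connected' hub metabolites
for node, degree in sorted(G.degree, key=lambda item: -item[1]):
    print(node, degree)
```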
Subjects
Energy Metabolism/drug effects, Estradiol/pharmacology, Estrogens/pharmacology, Metabolome/drug effects, Metabolomics/methods, Biological Models, Secondary Metabolism/drug effects, Toxicology/methods, High-Pressure Liquid Chromatography, Computational Biology, Data Mining, Factual Databases, Endocrine Disruptors/pharmacology, Humans, MCF-7 Cells, Reproducibility of Results, Electrospray Ionization Mass Spectrometry
ABSTRACT
The twenty-first century vision for toxicology involves a transition away from high-dose animal studies to in vitro and computational models (NRC in Toxicity testing in the 21st century: a vision and a strategy, The National Academies Press, Washington, DC, 2007). This transition requires mapping pathways of toxicity by understanding how in vitro systems respond to chemical perturbation. Uncovering the transcription factors/signaling networks responsible for gene expression patterns is essential for defining pathways of toxicity and, ultimately, for determining the chemical modes of action through which a toxicant acts. Traditionally, transcription factor identification is achieved via chromatin immunoprecipitation studies and summarized by calculating which transcription factors are statistically associated with up- and downregulated genes. These lists are commonly determined via statistical or fold-change cutoffs, a procedure that is sensitive to statistical power and may not be as useful for determining transcription factor associations. To move away from an arbitrary statistical or fold-change-based cutoff, we developed, in the context of the Mapping the Human Toxome project, an enrichment paradigm called information-dependent enrichment analysis (IDEA) to guide identification of the transcription factor network. We used activation of MCF-7 cells by 17β-estradiol (E2) as a test case. Using this new approach, we established a time course for transcriptional and functional responses to E2. ERα and ERβ were associated with short-term transcriptional changes in response to E2. Sustained exposure led to recruitment of additional transcription factors and alteration of cell cycle machinery. TFAP2C and SOX2 were the transcription factors most highly correlated with dose. E2F7, E2F1, and FOXM1, which are involved in cell proliferation, were enriched only at 24 h. IDEA should be useful for identifying candidate pathways of toxicity. IDEA outperforms gene set enrichment analysis (GSEA) and provides similar results to weighted gene correlation network analysis, a platform that helps to identify genes not annotated to pathways.
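IDEA itself is only described at a high level here, but the core question of any transcription factor enrichment step - are a factor's known targets overrepresented among responsive genes? - can be illustrated with a hypergeometric test. This is a generic stand-in under assumed counts, not the IDEA algorithm:

```python
from scipy.stats import hypergeom

# Hypothetical counts, not taken from the E2/MCF-7 experiment
total_genes      = 12000  # genes measured
tf_targets       = 300    # genes annotated as targets of a given transcription factor
responsive_genes = 800    # genes responding to treatment
overlap          = 45     # responsive genes that are also annotated targets

# P(X >= overlap): chance of seeing at least this many target genes by chance
p_value = hypergeom.sf(overlap - 1, total_genes, tf_targets, responsive_genes)
print(f"enrichment p-value ~ {p_value:.2e}")
```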
Subjects
Estradiol/toxicity, Estrogen Receptor alpha/drug effects, Estrogen Receptor beta/drug effects, Toxicity Tests/methods, Animals, Cell Proliferation/drug effects, Estradiol/administration & dosage, Estrogen Receptor alpha/metabolism, Estrogen Receptor beta/metabolism, Gene Expression Regulation/drug effects, Humans, MCF-7 Cells, SOXB1 Transcription Factors/genetics, Signal Transduction/drug effects, Time Factors, Transcription Factor AP-2/genetics, Transcription Factors/genetics
ABSTRACT
Deriving a Pathway of Toxicity from transcriptomic data remains a challenging task. We explore the use of weighted gene correlation network analysis (WGCNA) to extract an initial network from a small microarray study of MPTP toxicity in mice. Five modules were statistically significant; each module was analyzed for gene signatures in the Chemical and Genetic Perturbation subset of the Molecular Signatures Database as well as for over-represented transcription factor binding sites, and WGCNA clustered probes by function and captured pathways relevant to neurodegenerative disorders. The resulting network was analyzed for transcription factor candidates, which were narrowed down via text mining for relevance to the disease model and then combined with the large-scale FANTOM4 interaction database to generate a genetic regulatory network. Modules were enriched for transcription factors relevant to Parkinson's disease. Transcription factors significantly improved the number of genes that could be connected in a given component. For each module, the transcription factor with by far the highest number of interactions was SP1, which also had substantial experimental evidence for those interactions. This analysis both captures much of the known biology of MPTP toxicity and suggests several candidates for further study. Furthermore, it strongly suggests that SP1 plays a central role in coordinating the cellular response to MPTP toxicity.
Subjects
MPTP Poisoning/physiopathology, Sp1 Transcription Factor/physiology, 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine/adverse effects, Animals, Gene Expression Regulation/drug effects, Male, Mice, Inbred C57BL Mice, Microarray Analysis, Transcription Factors/physiology
ABSTRACT
Supervised learning methods promise to improve integrated testing strategies (ITS) but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combining skin sensitization data sets, such as weight of evidence, fail due to problems of information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) for hazards is to be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large, high-quality data sets. We curated such a data set and applied a recursive variable selection algorithm to evaluate the information available through in silico, in chemico, and in vitro assays. Chemical similarity alone could not cluster the chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination, which allows reducing the number of tests included in an ITS. Next, we applied a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e., the monotonic relationship between local lymph node assay class and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although the improvement in balanced accuracy may seem small, this obscures the actual improvement in misclassifications, as the dose-informed hidden Markov model strongly reduced "false negatives" (i.e., extreme sensitizers predicted as non-sensitizers) on all data sets.
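The recursive variable selection described above corresponds to recursive feature elimination wrapped around a classifier. A compact sketch on synthetic data; the features and labels are random placeholders, not the curated sensitization data set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))  # 120 chemicals x 8 assay/descriptor features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Recursive feature elimination: repeatedly drop the least important feature
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=3)
selector.fit(X, y)

print("selected feature indices:", np.where(selector.support_)[0])
print("elimination ranking:", selector.ranking_)  # 1 = retained; larger = dropped earlier
```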
Subjects
Animal Testing Alternatives/methods, Machine Learning, Quantitative Structure-Activity Relationship, Skin Tests/methods, Toxicity Tests/methods, Algorithms, Factual Databases, Drug Dose-Response Relationship, Humans, Brominated Hydrocarbons/toxicity, Local Lymph Node Assay, Markov Chains, Risk Assessment, Skin/drug effects, Skin/metabolism
ABSTRACT
The validation of new approach methods (NAMs) in toxicology faces significant challenges, including the integration of diverse data, selection of appropriate reference chemicals, and lengthy, resource-intensive consensus processes. This article proposes an artificial intelligence (AI)-based approach, termed e-validation, to optimize and accelerate the NAM validation process. E-validation employs advanced machine learning and simulation techniques to systematically design validation studies, select informative reference chemicals, integrate existing data, and provide tailored training. The approach aims to shorten current decade-long validation timelines, using fewer resources while enhancing rigor. Key components include the smart selection of reference chemicals using clustering algorithms, simulation of validation studies, mechanistic validation powered by AI, and AI-enhanced training for NAM education and implementation. A centralized dashboard interface could integrate these components, streamlining workflows and providing real-time decision support. The potential impacts of e-validation are extensive, promising to accelerate biomedical research, enhance chemical safety assessment, reduce animal testing, and drive regulatory and commercial innovation. While the integration of AI and machine learning offers significant advantages, challenges related to data quality, complexity of implementation, scalability, and ethical considerations must be addressed. Real-world validation and pilot studies are crucial to demonstrate the practical benefits and feasibility of e-validation. This transformative approach has the potential to revolutionize toxicological science and regulatory practices, ushering in a new era of predictive, personalized, and preventive health sciences.
Validating new methods to replace traditional animal testing for chemicals can be slow and costly, often taking up to ten years. This article introduces e-validation, an artificial intelligence (AI)-powered approach designed to speed up and improve this process. By using advanced computer techniques, e-validation selects the best chemicals for testing, designs efficient studies, and integrates existing data. This approach would cut validation time and use fewer resources. E-validation includes a smart system for choosing test chemicals, virtual simulations to predict study outcomes, and AI tools to understand the biological effects of chemicals. It also provides training in these new methods. E-validation could accelerate medical research, improve chemical safety, reduce the need for animal testing, and help create safer products faster. While promising, this new approach will need real-world testing to prove its benefits and address potential challenges.
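The "smart selection of reference chemicals using clustering algorithms" can be pictured as clustering a descriptor space and keeping the chemical nearest each cluster centre as a diverse reference candidate. A rough sketch under the assumption that descriptors are already computed; the values below are random placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

rng = np.random.default_rng(1)
X = rng.random((500, 16))  # 500 candidate chemicals x 16 descriptors (placeholder values)

# Cluster chemical space and pick one representative chemical per cluster
n_clusters = 10
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=1).fit(X)
representative_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)

print("reference chemical indices (one per cluster):", sorted(representative_idx.tolist()))
```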
Subjects
Animal Testing Alternatives, Artificial Intelligence, Animal Testing Alternatives/methods, Humans, Animals, Toxicology/methods, Machine Learning, Reproducibility of Results
ABSTRACT
Green toxicology is marching chemistry into the 21st century. This emerging framework will transform how chemical safety is evaluated by incorporating evaluation of the hazards, exposures, and risks associated with chemicals into early product development in a way that minimizes adverse impacts on human and environmental health. The goal is to minimize toxic threats across entire supply chains through smarter designs and policies. Traditional animal testing methods are replaced by faster, cutting-edge innovations like organs-on-chips and artificial intelligence predictive models, which are also more cost-effective. Core principles of green toxicology include utilizing alternative test methods, applying the precautionary principle, considering lifetime impacts, and emphasizing risk prevention over reaction. This paper provides an overview of these foundational concepts and describes current initiatives and future opportunities to advance the adoption of green toxicology approaches. Challenges and limitations are also discussed. Green shoots are emerging, with governments offering carrots like the European Green Deal to nudge industry. Notably, animal rights and environmental groups have different ideas about the need for testing and its consequences for animal use. Green toxicology represents the way forward to support both of these societal needs with sufficient throughput and human relevance for hazard information and minimal animal suffering. Green toxicology thus sets the stage to synergize human health and ecological values. Overall, the integration of green chemistry and toxicology has the potential to profoundly shift how chemical risks are evaluated and managed to achieve safety goals in a more ethical, ecologically conscious manner.
Green toxicology aims to make chemicals safer by design. It focuses on preventing toxicity issues early during development instead of testing after products are developed. Green toxicology uses modern non-animal methods like computer models and lab tests with human cells to predict whether chemicals could be hazardous. Benefits are faster results, lower costs, and less animal testing. The principles of green toxicology include using alternative tests, applying caution even with uncertain data, considering lifetime impacts across global supply chains, and emphasizing prevention over reaction. The article highlights European and US policy efforts to spur sustainable chemistry innovation, which will necessitate greener approaches to assess new materials and drive adoption. Overall, green toxicology seeks to integrate safer design concepts so that human and environmental health are valued equally with functionality and profit. This alignment promises safer, ethical products but faces challenges around validating new methods and overcoming institutional resistance to change.
Subjects
Artificial Intelligence, Chemical Safety, Animals, Humans, Animal Testing Alternatives, Environmental Health, Industries
ABSTRACT
PURPOSE: Biallelic germline pathogenic variants of the base excision repair (BER) pathway gene MUTYH predispose to colorectal cancer (CRC) and other cancers. The possible association of heterozygous variants with broader cancer susceptibility remains uncertain. This study investigated the prevalence and consequences of pathogenic MUTYH variants and MUTYH loss of heterozygosity (LOH) in a large pan-cancer analysis. MATERIALS AND METHODS: Data from 354,366 solid tumor biopsies that were sequenced as part of routine clinical care were analyzed using a validated algorithm to distinguish germline from somatic MUTYH variants. RESULTS: Biallelic germline pathogenic MUTYH variants were identified in 119 tissue biopsies. Most were CRCs and showed increased tumor mutational burden (TMB) and a mutational signature consistent with defective BER (COSMIC signature SBS18). Germline heterozygous pathogenic variants were identified in 5,991 biopsies, and their prevalence was modestly elevated in some cancer types. About 12% of these cancers (738 samples, including adrenal gland cancers, pancreatic islet cell tumors, nonglioma CNS tumors, GI stromal tumors, and thyroid cancers) showed somatic LOH for MUTYH, higher rates of chromosome 1p loss (where MUTYH is located), elevated genomic LOH, and higher COSMIC SBS18 signature scores, consistent with BER deficiency. CONCLUSION: This analysis of MUTYH alterations in a large set of solid cancers suggests that, in addition to the established role of biallelic pathogenic MUTYH variants in cancer predisposition, a broader range of cancers may arise in MUTYH heterozygotes via a mechanism involving somatic LOH at the MUTYH locus and defective BER. However, the effect is modest and requires confirmation in additional studies before being clinically actionable.
Subjects
DNA Glycosylases, Excision Repair, Neoplasms, Humans, Genetic Predisposition to Disease/genetics, Germ-Line Mutation/genetics, Mutation/genetics, Neoplasms/epidemiology, Neoplasms/genetics, DNA Glycosylases/genetics
ABSTRACT
The Human Exposome Project aims to revolutionize our understanding of how environmental exposures affect human health by systematically cataloging and analyzing the myriad exposures individuals encounter throughout their lives. This initiative draws a parallel with the Human Genome Project, expanding the focus from genetic factors to the dynamic and complex nature of environmental interactions. The project leverages advanced methodologies such as omics technologies, biomonitoring, microphysiological systems (MPS), and artificial intelligence (AI), forming the foundation of exposome intelligence (EI) to integrate and interpret vast datasets. Key objectives include identifying exposure-disease links, prioritizing hazardous chemicals, enhancing public health and regulatory policies, and reducing reliance on animal testing. The Implementation Moonshot Project for Alternative Chemical Testing (IMPACT), spearheaded by the Center for Alternatives to Animal Testing (CAAT), is a new element in this endeavor, driving the creation of a public-private partnership toward a Human Exposome Project with a stakeholder forum in 2025. Establishing robust infrastructure, fostering interdisciplinary collaborations, and ensuring quality assurance through systematic reviews and evidence-based frameworks are crucial for the project's success. The expected outcomes promise transformative advancements in precision public health, disease prevention, and a more ethical approach to toxicology. This paper outlines the strategic imperatives, challenges, and opportunities that lie ahead, calling on stakeholders to support and participate in this landmark initiative for a healthier, more sustainable future.
This paper outlines a proposal for a "Human Exposome Project" to comprehensively study how environmental exposures affect human health throughout our lives. The exposome refers to all the environmental factors we are exposed to, from chemicals to diet to stress. The project aims to use advanced technologies like artificial intelligence, lab-grown mini-organs, and detailed biological measurements to map how different exposures impact our health. This could help identify causes of diseases and guide better prevention strategies. Key goals include finding links between specific exposures and health problems, determining which chemicals are most concerning, improving public health policies, and reducing animal testing. The project requires collaboration between researchers, government agencies, companies, and others. While ambitious, this effort could revolutionize our understanding of environmental health risks. The potential benefits for improving health and preventing disease make this an important endeavor toward a precise and comprehensive approach to public health and disease prevention.
Subjects
Animal Testing Alternatives, Environmental Exposure, Exposome, Humans, Animals, Hazardous Substances/toxicity, Public Health, Environmental Monitoring/methods
ABSTRACT
Researchers in biomedical research, public health, and the life sciences often spend weeks or months discovering, accessing, curating, and integrating data from disparate sources, significantly delaying the onset of actual analysis and innovation. Instead of countless developers creating redundant and inconsistent data pipelines, BioBricks.ai offers a centralized data repository and a suite of developer-friendly tools to simplify access to scientific data. Currently, BioBricks.ai delivers over ninety biological and chemical datasets. It provides a package-manager-like system for installing and managing dependencies on data sources. Each 'brick' is a Data Version Control git repository that supports an updateable pipeline for extracting, transforming, and loading data into the BioBricks.ai backend at https://biobricks.ai. Use cases include accelerating data science workflows and facilitating the creation of novel data assets by integrating multiple datasets into unified, harmonized resources. In conclusion, BioBricks.ai offers an opportunity to accelerate access to and use of public data through a single open platform.
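As a sketch of the package-manager-style workflow, the snippet below assumes the biobricks Python client (pip install biobricks) exposes install() and assets() as in its documentation; the brick name is an illustrative choice and the call signatures are not verified against the current release:

```python
import biobricks as bb

bb.install("tox21")          # one-time download/registration, analogous to installing a dependency
assets = bb.assets("tox21")  # paths to the brick's data files (e.g., parquet), per the BioBricks docs
print(assets)                # inspect the available assets, then load them with pandas/pyarrow as needed
```

From there, a brick behaves like any local data dependency: the Data Version Control pipeline keeps it updateable while downstream analysis code only sees stable file paths.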
ABSTRACT
Both because of the shortcomings of existing risk assessment methodologies and because of newly available tools to predict hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also estimates of the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of increased complexity of the process, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges, and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the acknowledgement of user-friendly open-source software, a rise in the expertise of toxicologists required to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
Probabilistic risk assessment, initially from engineering, is applied in toxicology to understand chemical-related hazards and their consequences. In toxicology, uncertainties abound: unclear molecular events, varied proposed outcomes, and population-level assessments for issues like neurodevelopmental disorders. Establishing links between chemical exposures and diseases, especially rare events like birth defects, often demands extensive studies. Existing methods struggle with subtle effects or those affecting specific groups. Future risk assessments must address the developmental origins of disease, presenting challenges beyond current capabilities. The intricate nature of many toxicological processes, the lack of consensus on mechanisms and outcomes, and the need for nuanced population-level assessments highlight the complexities in understanding and quantifying risks associated with chemical exposures in the field of toxicology.
Subjects
Artificial Intelligence, Toxicology, Animals, Humans, Animal Testing Alternatives, Risk Assessment/methods, Uncertainty, Toxicology/methods
ABSTRACT
Both tissue-resident macrophages and monocytes recruited from the bone marrow that transform into tissue-resident cells play critical roles in mediating homeostasis as well as in the pathology of inflammatory diseases. Inorganic arsenic (iAs) is the most common drinking water contaminant worldwide and represents a major public health concern. Several diseases in which macrophages are implicated are caused by iAs exposure, including cardiovascular disease, cancer, and increased risk of infectious disease. Therefore, understanding the effects of iAs exposure on macrophages can help us better grasp the full range of arsenic immunotoxicity and better design therapeutic targets for iAs-induced diseases, particularly in exposed populations. In this study, we analyzed the transcriptome of low-dose iAs-exposed male and female murine bone marrow-derived macrophages (BMDMs) with either M0, M1, or M2 stimulation. We identified genes differentially expressed by iAs in a sex- and stimulation-dependent manner and used bioinformatics tools to predict protein-protein interactions, transcriptional regulatory networks, and associated biological processes. Overall, our data suggest that M1-stimulated, especially female-derived, BMDMs are most susceptible to iAs exposure. Most notably, we observed significant downregulation of major proinflammatory transcription factors, like IRF8, and its downstream targets, as well as genes encoding proteins involved in pattern recognition and antigen presentation, such as TLR7, TLR8, and H2-D1, potentially providing causal insight regarding arsenic's role in perturbing immune responses to infectious diseases. We also observed significant downregulation of genes involved in processes crucial to coordinating a proinflammatory response, including leukocyte migration, differentiation, and cytokine and chemokine production and response. Finally, we discovered that 24 X-linked genes were dysregulated in the iAs-exposed female stimulation groups, compared to only 3 across the iAs-exposed male stimulation groups. These findings elucidate potential mechanisms underlying the sex-differential iAs-associated immune-related disease risk.
ABSTRACT
Introduction: The positive identification of xenobiotics and their metabolites in human biosamples is an integral aspect of exposomics research, yet challenges in compound annotation and identification continue to limit the feasibility of comprehensively identifying total chemical exposure. Nonetheless, the adoption of in silico tools such as metabolite prediction software, QSAR-ready structural conversion workflows, and molecular standards databases can aid in identifying novel compounds in untargeted mass spectral investigations, permitting the assessment of a more expansive pool of compounds for human health hazard. This strategy is particularly applicable to flame retardant chemicals. The population is ubiquitously exposed to flame retardants, and evidence implicates some of these compounds as developmental neurotoxicants, endocrine disruptors, reproductive toxicants, immunotoxicants, and carcinogens. However, many flame retardants are poorly characterized, have not been linked to a definitive mode of toxic action, and are known to share metabolic breakdown products which may themselves harbor toxicity. As U.S. regulatory bodies begin to pursue a subclass-based risk assessment of organohalogen flame retardants, little consideration has been paid to the role of potentially toxic metabolites, or to expanding the identification of parent flame retardants and their metabolic breakdown products in human biosamples to better inform the human health hazards posed by these compounds. Methods: The purpose of this study is to utilize publicly available in silico tools to 1) characterize the structural and metabolic fates of proposed flame retardant classes, 2) predict first-pass metabolites, 3) ascertain whether metabolic products segregate among parent flame retardant classification patterns, and 4) assess the existing coverage of these compounds in mass spectral databases. Results: We found that flame retardant classes as currently defined by the National Academies of Sciences, Engineering, and Medicine (NASEM) are structurally diverse, with highly variable predicted pharmacokinetic properties and metabolic fates among member compounds. The vast majority of flame retardants (96%) and their predicted metabolites (99%) are not present in spectral databases, posing a challenge for identifying these compounds in human biosamples. However, we also demonstrate the utility of publicly available in silico methods in generating a fit-for-purpose synthetic spectral library for flame retardants and their metabolites that have yet to be identified in human biosamples. Discussion: In conclusion, exposomics studies making use of fit-for-purpose synthetic spectral databases will better resolve internal exposure, windows of vulnerability associated with complex exposures to flame retardant chemicals, and perturbed neurodevelopmental, reproductive, and other associated apical human health impacts.
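The QSAR-ready structural conversion step amounts to standardizing structures (desalting, neutralizing, canonicalizing) so that predicted metabolites can be matched against databases, e.g., by InChIKey. A minimal RDKit sketch of that idea; it does not reproduce the exact cleanup rules of the published QSAR-ready workflow:

```python
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

smiles = "[Na+].[O-]C(=O)c1ccccc1O"  # toy input: sodium salicylate

mol = Chem.MolFromSmiles(smiles)
mol = rdMolStandardize.Cleanup(mol)         # basic sanitization/normalization
mol = rdMolStandardize.FragmentParent(mol)  # keep the parent fragment, dropping the counter-ion
mol = rdMolStandardize.ChargeParent(mol)    # neutralize charges where possible

print(Chem.MolToSmiles(mol))    # canonical, "QSAR-ready"-style SMILES
print(Chem.MolToInchiKey(mol))  # key usable for matching against spectral databases
```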
ABSTRACT
The brain is arguably the most powerful computation system known. It is extremely efficient in processing large amounts of information and can discern signals from noise, adapt, and filter faulty information, all while running on only 20 watts of power. The human brain's processing efficiency, progressive learning, and plasticity are unmatched by any computer system. Recent advances in stem cell technology have elevated the field of cell culture to higher levels of complexity, such as the development of three-dimensional (3D) brain organoids that recapitulate human brain functionality better than traditional monolayer cell systems. Organoid Intelligence (OI) aims to harness the innate biological capabilities of brain organoids for biocomputing and synthetic intelligence by interfacing them with computer technology. With the latest strides in stem cell technology, bioengineering, and machine learning, we can explore the ability of brain organoids to compute and store given information (input) and execute a task (output), and study how this affects the structural and functional connections in the organoids themselves. Furthermore, understanding how learning generates and changes patterns of connectivity in organoids can shed light on the early stages of cognition in the human brain. Investigating and understanding these concepts is an enormous, multidisciplinary endeavor that necessitates the engagement of both the scientific community and the public. Thus, on February 22-24, 2022, Johns Hopkins University held the first Organoid Intelligence Workshop to form an OI community and to lay the groundwork for the establishment of OI as a new scientific discipline. The potential of OI to revolutionize computing, neurological research, and drug development was discussed, along with a vision and roadmap for its development over the coming decade.
ABSTRACT
Safety sciences must cope with uncertainty of models and results as well as information gaps. Acknowledging this uncertainty necessitates embracing probabilities and accepting the remaining risk. Every toxicological tool delivers only probable results. Traditionally, this is taken into account by using uncertainty/assessment factors and worst-case/precautionary approaches and thresholds. Probabilistic methods and Bayesian approaches seek to characterize these uncertainties and promise to support better risk assessment and, thereby, improve risk management decisions. Actual assessments of uncertainty can be more realistic than worst-case scenarios and may allow less conservative safety margins. Most importantly, as soon as we agree on uncertainty, this defines room for improvement and allows a transition from traditional to new approach methods as an engineering exercise. The objective nature of these mathematical tools allows each methodology to be assigned its fair place in evidence integration, whether in the context of risk assessment, systematic reviews, or the definition of an integrated testing strategy (ITS) / defined approach (DA) / integrated approach to testing and assessment (IATA). This article gives an overview of methods for probabilistic risk assessment and their application to exposure assessment, physiologically-based kinetic modelling, probability of hazard assessment (based on quantitative and read-across-based structure-activity relationships and mechanistic alerts from in vitro studies), individual susceptibility assessment, and evidence integration. Additional aspects are opportunities for uncertainty analysis of adverse outcome pathways and their relation to thresholds of toxicological concern. In conclusion, probabilistic risk assessment will be key for constructing a new toxicology paradigm - probably!
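One concrete probabilistic building block mentioned above - comparing an exposure distribution against an uncertain hazard threshold - can be sketched as Monte Carlo propagation, yielding a probability of exceedance instead of a single deterministic margin of safety. The lognormal parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal distributions (mg/kg bw/day); parameters are illustrative only
exposure  = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=n)  # population exposure
threshold = rng.lognormal(mean=np.log(1.0),  sigma=0.5, size=n)  # uncertain toxicity threshold

# Probabilistic outputs: exceedance probability and a lower-tail margin of exposure
p_exceed = np.mean(exposure > threshold)
margin_5th = np.percentile(threshold / exposure, 5)

print(f"P(exposure > threshold) ~ {p_exceed:.4f}")
print(f"5th percentile margin of exposure ~ {margin_5th:.1f}")
```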
Subjects
Toxicology, Bayes Theorem, Risk Assessment, Uncertainty
ABSTRACT
Green chemistry seeks to design less hazardous chemicals, but many of the efforts to replace chemicals have resulted in so-called "Regrettable Substitutions", when a chemical with an unknown or unforeseen hazard is used to replace a chemical identified as problematic. Here, we discuss the literature on regrettable substitution and focus on an oft-mentioned case, Bisphenol A, which was replaced with Bisphenol S, and the lessons that can be learned from this history. In particular, we focus on how Green Toxicology can offer a way to make better substitutions.
ABSTRACT
Failure to adequately characterize cell lines and understand the differences between in vitro and in vivo biology can have serious consequences for the translatability of in vitro scientific studies to human clinical trials. This project focuses on the Michigan Cancer Foundation-7 (MCF-7) cells, a human breast adenocarcinoma cell line that is commonly used for in vitro cancer research, with over 42,000 publications in PubMed. In this study, we explore the key similarities and differences in gene expression networks of MCF-7 cell lines compared to human breast cancer tissues. We used two MCF-7 data sets: one collected by ARCHS4, including 1032 samples, and one from Gene Expression Omnibus (GSE50705) with 88 estradiol-treated MCF-7 samples. The human breast invasive ductal carcinoma (BRCA) data set came from The Cancer Genome Atlas and included 1212 breast tissue samples. Weighted Gene Correlation Network Analysis (WGCNA) and functional annotations of the data showed that MCF-7 cells and human breast tissues have only minimal similarity in biological processes, although some fundamental functions, such as cell cycle, are conserved. Scaled connectivity, a network topology metric, also showed drastic differences in the behavior of genes between the MCF-7 and BRCA data sets. Finally, we used canSAR to compute ligand-based druggability scores of genes in the data sets, and our results suggested that using MCF-7 to study breast cancer may lead to missing important gene targets. Our comparison of the networks of MCF-7 and human breast cancer highlights the nuances of using MCF-7 to study human breast cancer and can contribute to better experimental design and interpretation of results in studies involving this cell line.
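Scaled connectivity, the network topology metric referred to above, is each gene's whole-network connectivity divided by the maximum connectivity in the network. A small sketch on a random expression matrix standing in for the WGCNA adjacency used in the study:

```python
import numpy as np

rng = np.random.default_rng(7)
expr = rng.normal(size=(50, 200))  # 50 samples x 200 genes (synthetic stand-in)

# WGCNA-style unsigned adjacency: soft-threshold the absolute correlation (beta = 6)
corr = np.corrcoef(expr, rowvar=False)
adjacency = np.abs(corr) ** 6
np.fill_diagonal(adjacency, 0)

# Connectivity k_i is the row sum of the adjacency; scaled connectivity divides by the maximum
k = adjacency.sum(axis=0)
scaled_k = k / k.max()
print("top 5 hub genes (column indices):", np.argsort(scaled_k)[::-1][:5])
```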