ABSTRACT
We report a small-footprint, cost-effective isothermal rapid DNA amplification system with integrated microfluidics for automated sample analysis and detection of SARS-CoV-2 in human and environmental samples. Our system measures low-level fluorescent signals in real time during amplification while maintaining the desired assay temperature on a low-power, portable system footprint. A unique soft microfluidic chip design was implemented to mitigate thermocapillary effects and to facilitate optical alignment for automated image capture and signal analysis. The system-on-board prototype, coupled with the LAMP primers designed by BioCoS, was sensitive enough to detect large variations in viral loads of SARS-CoV-2, corresponding to a threshold cycle range of 16 to 39. Furthermore, the tested samples covered a broad range of viral strains and lineages identified in Canada during 2021-2022. Clinical specimens were collected and tested at the Kingston Health Science Centre using a clinically validated PCR assay, and variants were determined by whole-genome sequencing.
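To illustrate how a real-time fluorescence readout of this kind can be reduced to a single time-to-positive value, the sketch below fits a four-parameter logistic curve to a synthetic LAMP amplification trace and reports when the signal crosses a threshold. The data, threshold choice, and function names are illustrative assumptions, not the instrument's actual analysis pipeline.

```python
# Minimal sketch: estimate time-to-positive from a real-time LAMP fluorescence trace.
# All values are synthetic and purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, base, amp, t_half, slope):
    """Four-parameter logistic amplification curve."""
    return base + amp / (1.0 + np.exp(-(t - t_half) / slope))

# Synthetic fluorescence trace: one reading every 0.5 min over a 40 min run
t = np.arange(0, 40, 0.5)
rng = np.random.default_rng(0)
signal = sigmoid(t, 100, 900, 18, 1.5) + rng.normal(0, 10, t.size)

# Fit the curve; call the reaction positive when it crosses 20% of the fitted amplitude
p0 = [signal.min(), np.ptp(signal), 20.0, 2.0]
params, _ = curve_fit(sigmoid, t, signal, p0=p0)
base, amp, t_half, slope = params
threshold = base + 0.2 * amp
time_to_positive = t[np.argmax(sigmoid(t, *params) >= threshold)]
print(f"Estimated time-to-positive: {time_to_positive:.1f} min")
```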
ABSTRACT
Chemoinformatics has developed efficient ways of representing the chemical structures of small molecules as simple, machine-readable text strings, notably the simplified molecular-input line-entry system (SMILES) and the IUPAC International Chemical Identifier (InChI). In particular, InChIs have been extended to encode formalized representations of mixtures and reactions, and work is ongoing to represent polymers and other macromolecules in this way. The next frontier is encoding the multi-component structures of nanomaterials (NMs) in a machine-readable format to enable linking of datasets for nanoinformatics and regulatory applications. A workshop organized by the H2020 research infrastructure NanoCommons and the nanoinformatics project NanoSolveIT analyzed the issues involved in developing an InChI for NMs (NInChI). The layers needed to capture NM structures include, but are not limited to: core composition (possibly multi-layered); surface topography; surface coatings or functionalization; doping with other chemicals; and representation of impurities. NM distributions (size, shape, composition, surface properties, etc.), the types of chemical linkages connecting surface functionalization and coating molecules to the core, and the various crystallographic forms exhibited by NMs also need to be considered. Six case studies were conducted to elucidate the requirements for unambiguous description of NMs. The suggested NInChI layers are intended to stimulate further analysis that will lead to the first version of a "nano" extension to the InChI standard.
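For context, the sketch below uses the open-source RDKit toolkit to generate the existing standard InChI and InChIKey from a SMILES string for a small molecule; the proposed NInChI layers would extend this kind of machine-readable identifier to nanomaterial cores, coatings, and distributions, but no such extension is implemented in any current toolkit.

```python
# Minimal sketch: standard InChI generation with RDKit (small molecules only).
# The NInChI layers discussed above are not implemented anywhere yet; this only
# shows the existing baseline identifier that a "nano" extension would build on.
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used purely as an example
mol = Chem.MolFromSmiles(smiles)

inchi = Chem.MolToInchi(mol)
inchi_key = Chem.MolToInchiKey(mol)
print(inchi)      # full, layered structural identifier
print(inchi_key)  # hashed, fixed-length form convenient for database lookup
```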
ABSTRACT
Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward, and the tools must be selected carefully, since this choice affects the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps, such as quality check, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline preprocessing methods for three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for identifying differentially expressed genes and for performing functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
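A minimal sketch of the steps discussed above, assuming a simple two-group bulk RNA-Seq design and synthetic counts: low-count filtering, log-CPM normalization, and a naive per-gene test with Benjamini-Hochberg correction. Production workflows would use dedicated packages (e.g. limma, edgeR, DESeq2) rather than this simplified stand-in.

```python
# Minimal sketch of a bulk RNA-Seq preprocessing path: filter low-count genes,
# normalise to log2 counts-per-million, then test each gene between two groups.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
counts = pd.DataFrame(rng.poisson(50, size=(1000, 6)),
                      index=[f"gene_{i}" for i in range(1000)],
                      columns=[f"s{i}" for i in range(6)])
groups = np.array(["control"] * 3 + ["exposed"] * 3)

# 1. Filtering: keep genes with at least 10 counts in at least 3 samples
keep = (counts >= 10).sum(axis=1) >= 3
counts = counts.loc[keep]

# 2. Normalisation: library-size scaling to log2 CPM
cpm = np.log2(counts / counts.sum(axis=0) * 1e6 + 1)

# 3. Differential expression: per-gene t-test with Benjamini-Hochberg FDR
pvals = cpm.apply(lambda row: ttest_ind(row[groups == "control"],
                                        row[groups == "exposed"]).pvalue, axis=1)
fdr = multipletests(pvals, method="fdr_bh")[1]
print((fdr < 0.05).sum(), "genes pass FDR < 0.05")
```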
ABSTRACT
The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing relies on extensive observations of phenotypic endpoints in vivo and on complementary in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms' responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). A lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on experimental design, technologies, publicly available data, and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modeling described in the second and third parts of the review series.
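As one concrete aspect of the experimental design guidance above, the sketch below shows a simple way to randomize samples across processing batches so that dose and batch are not confounded. The sample sheet, group labels, and batch scheme are hypothetical placeholders, not a prescribed protocol.

```python
# Minimal sketch: randomise exposure groups across processing batches so that
# dose and batch are not confounded (a common source of spurious TGx signal).
import numpy as np
import pandas as pd

samples = pd.DataFrame({
    "sample_id": [f"S{i:02d}" for i in range(24)],
    "dose": np.repeat(["control", "low", "mid", "high"], 6),
    "timepoint_h": np.tile([6, 24, 48], 8),
})

# Shuffle, then deal samples round-robin into 4 batches of 6, so each batch
# typically receives a mix of doses and timepoints rather than one whole group.
shuffled = samples.sample(frac=1, random_state=42).reset_index(drop=True)
shuffled["batch"] = shuffled.index % 4 + 1

print(pd.crosstab(shuffled["batch"], shuffled["dose"]))
```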
ABSTRACT
Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly increasing, together with a plethora of methods made available to facilitate their analysis and interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or to identify specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
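As a minimal sketch of the BMD idea mentioned above, the code below fits a Hill dose-response model to an illustrative series and solves for the dose producing a 10% change from the fitted control level. The data, model choice, and benchmark response are assumptions; dedicated tools (e.g. BMDExpress) add model selection and confidence limits.

```python
# Minimal sketch of benchmark dose (BMD) estimation from an illustrative
# dose-response series, using a Hill model and a 10% benchmark response.
import numpy as np
from scipy.optimize import curve_fit, brentq

def hill(dose, baseline, vmax, kd, n):
    """Hill dose-response model."""
    return baseline + vmax * dose**n / (kd**n + dose**n)

doses = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])
response = np.array([1.00, 1.02, 1.10, 1.35, 1.70, 1.85])  # illustrative fold changes

params, _ = curve_fit(hill, doses, response, p0=[1.0, 1.0, 1.0, 1.0],
                      bounds=([0.0, 0.0, 1e-3, 0.5], [10.0, 10.0, 100.0, 10.0]))
baseline = params[0]
bmr = 1.10 * baseline  # benchmark response: 10% increase over fitted control level

# The BMD is the dose at which the fitted curve reaches the benchmark response
bmd = brentq(lambda d: hill(d, *params) - bmr, 1e-6, doses.max())
print(f"Estimated BMD for a 10% benchmark response: {bmd:.2f}")
```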
ABSTRACT
Nanotechnology has enabled the discovery of a multitude of novel materials exhibiting unique physicochemical (PChem) properties compared to their bulk analogues. These properties have led to a rapidly increasing range of commercial applications; this, however, may come at a cost if an association with long-term health and environmental risks is discovered, or even just perceived. Many nanomaterials (NMs) have not yet had their potential adverse biological effects fully assessed, due to the costs and time constraints associated with experimental assessment, which frequently involves animals. Here, the available NM libraries are analyzed for their suitability for integration with novel nanoinformatics approaches and for the development of NM-specific Integrated Approaches to Testing and Assessment (IATA) for human and environmental risk assessment, all within the NanoSolveIT cloud platform. These established and well-characterized NM libraries (e.g. NanoMILE, NanoSolutions, NANoREG, NanoFASE, caLIBRAte, NanoTEST and the Nanomaterial Registry (>2000 NMs)) contain physicochemical characterization data as well as data for several relevant biological endpoints, assessed in part using harmonized Organisation for Economic Co-operation and Development (OECD) methods and test guidelines. Integration of such extensive NM information sources with the latest nanoinformatics methods will allow NanoSolveIT to model the relationships between NM structure (morphology), properties, and adverse effects, and to predict the effects of other NMs for which fewer data are available. The project specifically addresses the needs of regulatory agencies and industry to effectively and rapidly evaluate exposure, NM hazard, and risk from nanomaterials and nano-enabled products, enabling the implementation of computational 'safe-by-design' approaches to facilitate NM commercialization.
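The sketch below illustrates the general kind of structure-property-effect model such a platform would build: a random forest relating physicochemical descriptors to a toxicity endpoint. The descriptors and endpoint here are synthetic placeholders, not data from the libraries named above.

```python
# Minimal sketch of a nanoinformatics-style model: a random forest relating
# physicochemical descriptors to a toxicity endpoint (all data synthetic).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
descriptors = pd.DataFrame({
    "core_size_nm": rng.uniform(5, 100, n),
    "zeta_potential_mV": rng.normal(-20, 15, n),
    "surface_area_m2_g": rng.uniform(10, 300, n),
    "coating_thickness_nm": rng.uniform(0, 10, n),
})
# Synthetic endpoint loosely dependent on size and charge, plus noise
toxicity = (50 / descriptors["core_size_nm"]
            + 0.02 * descriptors["zeta_potential_mV"].abs()
            + rng.normal(0, 0.5, n))

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, descriptors, toxicity, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))
```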
ABSTRACT
BACKGROUND: B-cell chronic lymphocytic leukemia (CLL) is a common type of adult leukemia. It often follows an indolent course and is preceded by monoclonal B-cell lymphocytosis, an asymptomatic condition; however, it is not known what causes subjects with this condition to progress to CLL. Hence, the discovery of prediagnostic markers has the potential to improve the identification of subjects likely to develop CLL and may also provide insights into the pathogenesis of the disease of potential clinical relevance. RESULTS: We employed peripheral blood buffy coats of 347 apparently healthy subjects, of whom 28 were diagnosed with CLL 2.0-15.7 years after enrollment, to derive for the first time genome-wide DNA methylation, as well as gene and miRNA expression, profiles associated with the risk of future disease. After adjustment for white blood cell composition, we identified 722 differentially methylated CpG sites and 15 differentially expressed genes (Bonferroni-corrected p < 0.05), as well as 2 miRNAs (FDR < 0.05), which were associated with the risk of future CLL. The majority of these signals have also been observed in clinical CLL, suggesting the presence in prediagnostic blood of CLL-like cells. Future CLL cases who, at enrollment, had a relatively low B-cell fraction (<10%), and were therefore less likely to have been suffering from undiagnosed CLL or a precursor condition, showed profiles involving smaller numbers of the same differential signals, with intensities, after adjusting for B-cell content, generally smaller than those observed in the full set of cases. A similar picture was obtained when the differential profiles of cases with time-to-diagnosis above the overall median period of 7.4 years were compared with those of cases with shorter time-to-diagnosis. Differentially methylated genes of major functional significance include numerous genes encoding transcription factors, especially members of the homeobox family, while differentially expressed genes include, among others, multiple genes related to WNT signaling, as well as the miRNAs miR-150-5p and miR-155-5p. CONCLUSIONS: Our findings demonstrate the presence in the prediagnostic blood of future CLL patients, more than 10 years before diagnosis, of CLL-like cells which evolve as preclinical disease progresses, and point to early molecular alterations with pathogenetic potential.
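A minimal sketch of the adjustment strategy described above: a per-CpG linear model of methylation on future case status, with blood cell fractions as covariates, followed by Bonferroni correction. All data, covariate names, and dimensions are synthetic placeholders, not the study's pipeline.

```python
# Minimal sketch: per-CpG linear model adjusting for blood cell composition,
# with Bonferroni correction (synthetic data only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subjects, n_cpgs = 300, 500
meta = pd.DataFrame({
    "future_case": rng.integers(0, 2, n_subjects),
    "b_cell_frac": rng.uniform(0.02, 0.15, n_subjects),
    "t_cell_frac": rng.uniform(0.10, 0.40, n_subjects),
})
methylation = rng.uniform(0, 1, size=(n_subjects, n_cpgs))  # beta values

X = sm.add_constant(meta[["future_case", "b_cell_frac", "t_cell_frac"]])
pvals = np.array([
    sm.OLS(methylation[:, j], X).fit().pvalues["future_case"]
    for j in range(n_cpgs)
])
bonferroni_hits = (pvals * n_cpgs < 0.05).sum()
print(bonferroni_hits, "CpGs pass Bonferroni-corrected p < 0.05")
```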
Subjects
Biomarkers, Tumor; Gene Expression Profiling; Leukemia, Lymphocytic, Chronic, B-Cell; Biomarkers, Tumor/genetics; DNA Methylation; Gene Expression Regulation, Neoplastic; Leukemia, Lymphocytic, Chronic, B-Cell/blood; Leukemia, Lymphocytic, Chronic, B-Cell/diagnosis; Leukemia, Lymphocytic, Chronic, B-Cell/genetics; MicroRNAs/genetics; Prognosis; Time Factors; Humans
ABSTRACT
We recently reported that differential gene expression and DNA methylation profiles in the blood leukocytes of apparently healthy smokers predict with remarkable efficiency diseases and conditions known to be causally associated with smoking, suggesting that blood-based omic profiling of human populations may be useful for linking environmental exposures to potential health effects. Here we report on the sex-specific effects of tobacco smoking on transcriptomic and epigenetic features derived from genome-wide profiling in white blood cells, identifying 26 expression probes and 92 CpG sites, almost all of which are affected only in female smokers. Strikingly, these features relate to numerous genes with a key role in the pathogenesis of cardiovascular disease, especially thrombin signaling, including the platelet thrombin receptors F2R (coagulation factor II (thrombin) receptor; PAR1) and GP5 (glycoprotein 5), as well as HMOX1 (haem oxygenase 1) and BCL2L1 (BCL2-like 1), which are involved in protection against oxidative stress and apoptosis, respectively. These results are in concordance with epidemiological evidence of higher female susceptibility to tobacco-induced cardiovascular disease and underline the potential of blood-based omic profiling in hazard and risk assessment.
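One common way to test for a sex-specific exposure effect of this kind, sketched here on synthetic data for a single feature, is a linear model with a smoking-by-sex interaction term; the variable names and effect sizes are hypothetical and this is not the authors' analysis code.

```python
# Minimal sketch: test a sex-specific smoking effect for one expression probe
# via a smoking x sex interaction term (synthetic data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "smoker": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "age": rng.integers(25, 70, n),
})
# Simulate a probe whose smoking effect is present only in females
df["expression"] = (8 + 0.6 * df["smoker"] * df["female"]
                    + 0.01 * df["age"] + rng.normal(0, 1, n))

fit = smf.ols("expression ~ smoker * female + age", data=df).fit()
print(fit.pvalues["smoker:female"])  # significance of the sex-specific effect
```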
Subjects
Cardiovascular Diseases/genetics; Epigenomics/methods; Gene Expression Profiling/methods; Tobacco Smoke Pollution/adverse effects; Adult; Aged; Cardiovascular Diseases/blood; Cardiovascular Diseases/chemically induced; CpG Islands; DNA Methylation; Epigenesis, Genetic; Female; Gene Expression Regulation/drug effects; Gene Regulatory Networks/drug effects; Humans; Male; Middle Aged; Sex Factors
ABSTRACT
The utility of blood-based omic profiles for linking environmental exposures to their potential health effects was evaluated in 649 individuals drawn from the general population, in relation to tobacco smoking, an exposure with well-characterised health effects. Using disease connectivity analysis, we found that the combination of smoking-modified, genome-wide gene (including miRNA) expression and DNA methylation profiles predicts with remarkable reliability most diseases and conditions independently known to be causally associated with smoking (indicative estimates of sensitivity and positive predictive value of 94% and 84%, respectively). Bioinformatics analysis reveals the importance of a small number of smoking-modified, master-regulatory genes and suggests a central role for altered ubiquitination. The smoking-induced gene expression profiles overlap significantly with profiles present in the blood cells of patients with lung cancer or coronary heart disease, two diseases strongly associated with tobacco smoking. These results provide proof-of-principle support for the suggestion that omic profiling in peripheral blood has the potential to identify early, disease-related perturbations caused by toxic exposures and may be a useful tool in hazard and risk assessment.
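For readers unfamiliar with the evaluation metrics quoted above, the sketch below shows how sensitivity and positive predictive value are computed from a set of diseases flagged by a connectivity analysis versus a set of diseases known to be caused by smoking. The disease lists are illustrative placeholders, not the study's actual results.

```python
# Minimal sketch: sensitivity and positive predictive value from predicted vs.
# known disease sets (placeholder disease names, not the study's lists).
known_smoking_diseases = {"lung cancer", "COPD", "coronary heart disease",
                          "stroke", "bladder cancer"}
predicted_by_profile = {"lung cancer", "COPD", "coronary heart disease",
                        "stroke", "rheumatoid arthritis"}

true_pos = known_smoking_diseases & predicted_by_profile
sensitivity = len(true_pos) / len(known_smoking_diseases)   # recall of known diseases
ppv = len(true_pos) / len(predicted_by_profile)             # precision of predictions
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```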