Results 1 - 7 of 7
1.
J Microsc ; 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37696268

ABSTRACT

ModularImageAnalysis (MIA) is an ImageJ plugin providing a code-free graphical environment in which complex automated analysis workflows can be constructed and distributed. The broad range of included modules cover all stages of a typical analysis workflow, from image loading through image processing, object detection, extraction of measurements, measurement-based filtering, visualisation and data exporting. MIA provides out-of-the-box compatibility with many advanced image processing plugins for ImageJ including Bio-Formats, DeepImageJ, MorphoLibJ and TrackMate, allowing these tools and their outputs to be directly incorporated into analysis workflows. By default, modules support spatially calibrated 5D images, meaning measurements can be acquired in both pixel and calibrated units. A hierarchical object relationship model allows for both parent-child (one-to-many) and partner (many-to-many) relationships to be established. These relationships underpin MIA's ability to track objects through time, represent complex spatial relationships (e.g. topological skeletons) and measure object distributions (e.g. count puncta per cell). MIA features dual graphical interfaces: the 'editing view' offers access to the full list of modules and parameters in the workflow, while the simplified 'processing view' can be configured to display only a focused subset of controls. All workflows are batch-enabled by default, with image files within a specified folder being processed automatically and exported to a single spreadsheet. Beyond the included modules, functionality can be extended both internally, through integration with the ImageJ scripting interface, and externally, by developing third-party Java modules that extend the core MIA framework. Here we describe the design and functionality of MIA in the context of a series of real-world example analyses.
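The hierarchical object relationship model described above lends itself to a brief illustration. The following is a minimal Python sketch of the parent-child (one-to-many) idea, with hypothetical names throughout (this is not MIA's actual Java API), showing how a "count puncta per cell" measurement falls out of the relationship structure.

```python
# Sketch of a hierarchical object-relationship model in the style MIA
# describes. All class and method names are hypothetical illustrations.

class ImageObject:
    def __init__(self, name, object_id):
        self.name = name
        self.object_id = object_id
        self.children = {}   # relationship name -> list of child objects
        self.partners = {}   # relationship name -> list of partner objects

    def add_child(self, relationship, child):
        # Parent-child is one-to-many: one cell may own many puncta.
        self.children.setdefault(relationship, []).append(child)

    def count_children(self, relationship):
        # Measurement-style query, e.g. "count puncta per cell".
        return len(self.children.get(relationship, []))

cell = ImageObject("cell", 1)
for i in range(3):
    cell.add_child("puncta", ImageObject("punctum", i))

print(cell.count_children("puncta"))  # 3
```

A partner (many-to-many) relationship would be stored symmetrically in `partners` on both objects; the same per-relationship dictionary pattern applies.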

2.
Int J Mol Sci ; 21(1)2019 Dec 31.
Article in English | MEDLINE | ID: mdl-31906249

ABSTRACT

Recent improvements in the cost-effectiveness of high-throughput technologies have allowed RNA sequencing of total transcriptomes, making it possible to evaluate the expression and regulation of circRNAs, a relatively novel class of transcript isoforms with suggested roles in transcriptional and post-transcriptional gene expression regulation, as well as possible use as biomarkers owing to their deregulation in various human diseases. A limited number of integrated workflows exist for the prediction, characterization, and differential expression analysis of circRNAs, and none of them complies with computational reproducibility requirements. We developed Docker4Circ for the complete analysis of circRNAs from RNA-Seq data. Docker4Circ runs a comprehensive analysis of circRNAs in human and model organisms, including: circRNA prediction; classification and annotation using six public databases; back-splice sequence reconstruction; internal alternative splicing of circularizing exons; alignment-free circRNA quantification from RNA-Seq reads; and differential expression analysis. Docker4Circ makes circRNA analysis easier and more accessible thanks to: (i) its R interface; (ii) encapsulation of computational tasks into Docker images; (iii) the availability of a user-friendly Java GUI; and (iv) no need for advanced bash scripting skills. Furthermore, Docker4Circ ensures a reproducible analysis, since all its tasks are embedded in a Docker image following the guidelines of the Reproducible Bioinformatics Project.
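The back-splice reconstruction step can be illustrated with a short sketch. Assuming exons are given as half-open genomic intervals (the helper functions below are hypothetical, not Docker4Circ's actual interface), the circRNA sequence is the concatenation of the circularizing exons, and reads spanning the back-splice junction cover the wrap-around from the last exon's end back to the first exon's start:

```python
# Illustrative sketch of back-splice sequence reconstruction.
# Hypothetical helpers; not Docker4Circ's actual interface.

def backsplice_sequence(genome, exons):
    """exons: list of (start, end) 0-based half-open intervals on `genome`."""
    return "".join(genome[s:e] for s, e in exons)

def junction_kmer(circ_seq, k=4):
    # Reads spanning the back-splice cover the circular wrap-around point:
    # the last k bases of the circRNA followed by its first k bases.
    return circ_seq[-k:] + circ_seq[:k]

genome = "AAAACCCCGGGGTTTT"
circ = backsplice_sequence(genome, [(4, 8), (8, 12)])   # two circularizing exons
print(circ)                  # CCCCGGGG
print(junction_kmer(circ))   # GGGGCCCC
```

The junction k-mer is what distinguishes a back-spliced (circular) read from a linear one, since the sequence GGGGCCCC never occurs in the linear transcript.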


Subjects
Databases, Nucleic Acid; RNA, Circular/genetics; RNA-Seq; Software; Animals; Humans
3.
Front Microbiol ; 14: 1094800, 2023.
Article in English | MEDLINE | ID: mdl-37065158

ABSTRACT

Background: Microbiota profiles are strongly influenced by many technical aspects that affect researchers' ability to compare results. To investigate and identify potential biases introduced by technical variation, we compared several approaches throughout the entire workflow of a microbiome study, from sample collection to sequencing, using commercially available mock communities (from bacterial strains as well as from DNA) and multiple human fecal samples, including a large set of positive controls created as a random mix of several participant samples. Methods: Human fecal material was sampled, and aliquots were used to test two commercially available stabilization solutions (OMNIgene·GUT and Zymo Research) against samples frozen immediately upon collection. In addition, the DNA extraction methodology, DNA input, and the number of PCR cycles were analyzed. Furthermore, to investigate potential batch effects in DNA extraction, sequencing, and barcoding, we included 139 positive controls. Results: Samples preserved in both stabilization buffers showed limited overgrowth of Enterobacteriaceae compared with unpreserved samples stored at room temperature (RT). These stabilized samples stored at RT differed from immediately frozen samples, in which the relative abundance of Bacteroidota was higher and those of Actinobacteriota and Firmicutes were lower. As reported previously, the method used for cell disruption was a major contributor to variation in microbiota composition. In addition, a high number of PCR cycles led to an increase in contaminants detected in the negative controls. The DNA extraction method had a significant impact on microbial composition, as did the use of different Illumina barcodes during library preparation and sequencing, while no batch effect was observed between replicate runs.
Conclusion: Our study reaffirms the importance of the mechanical cell disruption method and immediate frozen storage as critical aspects of fecal microbiota studies. A comparison of storage conditions revealed that bias was limited in RT samples preserved in stabilization systems, which may be a suitable compromise when logistics are challenging due to the size or location of a study. Moreover, to reduce the effect of contaminants in fecal microbiota profiling studies, we suggest ~125 pg of input DNA and 25 PCR cycles as optimal parameters during library preparation.
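As a brief aside on how such compositional shifts are quantified: comparisons like the Enterobacteriaceae overgrowth above are made on relative abundances, since raw read counts depend on sequencing depth. A minimal sketch with illustrative values (not data from this study):

```python
# Sketch: microbiota comparisons are made on relative abundances because
# sequencing depth varies between samples. Counts below are invented.

def relative_abundance(counts):
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

frozen = {"Bacteroidota": 600, "Firmicutes": 300, "Enterobacteriaceae": 100}
room_temp = {"Bacteroidota": 300, "Firmicutes": 200, "Enterobacteriaceae": 500}

f = relative_abundance(frozen)
rt = relative_abundance(room_temp)
# Overgrowth shows up as a large shift in relative abundance:
print(round(rt["Enterobacteriaceae"] - f["Enterobacteriaceae"], 2))  # 0.4
```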

4.
Protein Cell ; 12(5): 315-330, 2021 05.
Article in English | MEDLINE | ID: mdl-32394199

ABSTRACT

Advances in high-throughput sequencing (HTS) have fostered rapid developments in the field of microbiome research, and massive microbiome datasets are now being generated. However, the diversity of software tools and the complexity of analysis pipelines make this field difficult to enter. Here, we systematically summarize the advantages and limitations of microbiome methods. Then, we recommend specific pipelines for amplicon and metagenomic analyses, and describe commonly used software and databases, to help researchers select the appropriate tools. Furthermore, we introduce statistical and visualization methods suitable for microbiome analysis, including alpha- and beta-diversity, taxonomic composition, difference comparisons, correlation, networks, machine learning, evolution, source tracing, and common visualization styles, to help researchers make informed choices. Finally, a step-by-step reproducible analysis guide is introduced. We hope this review will allow researchers to carry out data analysis more effectively and to quickly select the appropriate tools in order to efficiently mine the biological significance behind the data.
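One of the alpha-diversity statistics covered by such reviews, the Shannon index, is simple enough to sketch directly from raw taxon counts (a generic illustration, not any specific pipeline's implementation):

```python
# Shannon alpha-diversity for a single sample, computed from taxon counts:
# H = -sum(p_i * ln(p_i)) over taxa with nonzero proportion p_i.
import math

def shannon_index(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# An even community is more diverse than a skewed one with the same richness.
print(shannon_index([25, 25, 25, 25]) > shannon_index([97, 1, 1, 1]))  # True
```

Beta-diversity works analogously but compares pairs of samples (e.g. Bray-Curtis dissimilarity) rather than summarizing one sample.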


Subjects
Algorithms; High-Throughput Nucleotide Sequencing; Metagenome; Metagenomics; Microbiota/genetics; Software; Computational Biology
5.
Methods Mol Biol ; 2051: 345-371, 2020.
Article in English | MEDLINE | ID: mdl-31552637

ABSTRACT

In any analytical discipline, data analysis reproducibility is closely interlinked with data quality. In this book chapter, focused on mass spectrometry-based proteomics approaches, we introduce how data analysis reproducibility and data quality influence each other, and how data quality and data analysis design can be used to increase robustness and improve reproducibility. We first introduce methods and concepts for designing and maintaining robust data analysis pipelines such that reproducibility can be increased in parallel. The technical aspects of data analysis reproducibility are challenging, and current ways to increase overall robustness are multifaceted; software containerization and cloud infrastructures play an important part. We also show how quality control (QC) and quality assessment (QA) approaches can be used to spot analytical issues, reduce experimental variability, and increase confidence in the analytical results of (clinical) proteomics studies, since experimental variability plays a substantial role in analysis reproducibility. We therefore give an overview of existing solutions for QC/QA, including different quality metrics and methods for longitudinal monitoring. The efficient use of both types of approaches provides a way to improve experimental reliability, reproducibility, and consistency in proteomics analytical measurements.
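Longitudinal QC monitoring of the kind described above is often implemented as a simple control chart: runs whose QC metric drifts beyond a few standard deviations of a reference set are flagged for inspection. A minimal sketch with made-up metric values (not tied to any specific proteomics QC package):

```python
# Control-chart style QC check: flag new runs whose metric falls outside
# mean +/- n_sigma * stdev of a reference set. Values below are invented.
import statistics

def flag_outliers(reference, new_runs, n_sigma=3.0):
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return [x for x in new_runs if abs(x - mu) > n_sigma * sigma]

# e.g. peptide identifications per run as the longitudinal QC metric
reference_ids = [21050, 20980, 21110, 20875, 21200]
print(flag_outliers(reference_ids, [21000, 15500]))  # [15500]
```

In practice several metrics (identifications, mass accuracy, chromatographic peak width, etc.) are tracked in parallel, but each follows this same per-metric pattern.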


Subjects
Cloud Computing; Data Analysis; Proteomics/methods; Quality Control; Data Accuracy; Humans; Mass Spectrometry; Reproducibility of Results; Software
6.
Protein & Cell ; (12): 315-330, 2021.
Article in English | WPRIM | ID: wpr-880878


7.
Cancer Inform ; 13(Suppl 7): 111-22, 2014.
Article in English | MEDLINE | ID: mdl-26085786

ABSTRACT

With the recent results of promising cancer vaccines and immunotherapy [1-5], immune monitoring has become increasingly relevant for measuring treatment-induced effects on T cells, and an essential tool for shedding light on the mechanisms responsible for a successful treatment. Flow cytometry is the canonical multi-parameter assay for the fine characterization of single cells in solution, and is ubiquitously used in pre-clinical tumor immunology and in cancer immunotherapy trials. Current state-of-the-art polychromatic flow cytometry involves multi-step, multi-reagent assays followed by sample acquisition on sophisticated instruments capable of capturing up to 20 parameters per cell at a rate of tens of thousands of cells per second. Given the complexity of flow cytometry assays, reproducibility is a major concern, especially for multi-center studies. A promising approach for improving reproducibility is the use of automated analysis borrowing from statistics, machine learning, and information visualization [21-23], as these methods directly address the subjectivity, operator dependence, labor intensiveness, and low fidelity of manual analysis. However, it is quite time-consuming to investigate and test new automated analysis techniques on large data sets without a centralized information management system. For large-scale automated analysis to be practical, consistent, high-quality data linked to the raw FCS files is indispensable. In particular, the use of machine-readable standard vocabularies to characterize channel metadata is essential when constructing analytic pipelines, to avoid errors in the processing, analysis, and interpretation of results. For automation, this high-quality metadata must be programmatically accessible, implying the need for a consistent Application Programming Interface (API).
In this manuscript, we propose that upfront time spent normalizing flow cytometry data to conform to carefully designed data models enables automated analysis, potentially saving time in the long run. The ReFlow informatics framework was developed to address these data management challenges.
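The role of machine-readable channel vocabularies can be sketched briefly. The mapping below is illustrative (hypothetical labels and vocabulary, not ReFlow's actual API), but it shows how normalizing free-text FCS channel labels onto a controlled vocabulary lets automated pipelines match channels across centers instead of failing silently on spelling variants:

```python
# Sketch: normalize free-text FCS channel labels onto a controlled
# vocabulary. The vocabulary and labels here are invented examples.

CONTROLLED_VOCAB = {
    "cd3": "CD3", "cd-3": "CD3",
    "cd4": "CD4",
    "ifng": "IFN-gamma", "ifn-g": "IFN-gamma",
}

def normalize_channels(labels):
    out = []
    for label in labels:
        key = label.strip().lower().replace(" ", "")
        if key not in CONTROLLED_VOCAB:
            # Failing loudly at normalization time prevents silent errors
            # later in processing, analysis, and interpretation.
            raise ValueError(f"unmapped channel label: {label!r}")
        out.append(CONTROLLED_VOCAB[key])
    return out

print(normalize_channels(["CD3", "cd4", "IFN-g"]))  # ['CD3', 'CD4', 'IFN-gamma']
```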
