Results 1 - 20 of 150

1.
Neuroimage ; 301: 120874, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39341472

ABSTRACT

Combining Non-Invasive Brain Stimulation (NIBS) techniques with the recording of brain electrophysiological activity is an increasingly widespread approach in neuroscience. The simultaneous combination of Transcranial Magnetic Stimulation (TMS) and Electroencephalography (EEG) has been particularly successful. Unfortunately, the strong magnetic pulse required to effectively interact with brain activity inevitably induces artifacts in the concurrent EEG acquisition, so careful but aggressive pre-processing is required to remove them efficiently. As already reported in the literature, however, different preprocessing approaches can introduce variability in the results. Here we characterize the three main TMS-EEG preprocessing pipelines currently available, namely ARTIST (Wu et al., 2018), TESA (Rogasch et al., 2017) and SOUND/SSP-SIR (Mutanen et al., 2018, 2016), providing insight for researchers who need to choose between different approaches. Unlike previous works, we tested the pipelines using a synthetic TMS-EEG signal with a known ground truth (the artifact-free signal to be reconstructed). This made it possible to assess the reliability of each pipeline precisely and quantitatively, providing a more robust reference for future research. In summary, we found that all pipelines performed well, but with differences in the spatio-temporal precision of the ground-truth reconstruction. Crucially, the three pipelines affected inter-trial variability differently, with ARTIST introducing inter-trial variability not intrinsic to the ground-truth signal.
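
The evaluation logic is straightforward to sketch. Below is a minimal R illustration, with random matrices standing in for the synthetic ground truth and a pipeline's cleaned output (all names, dimensions, and noise levels are hypothetical, not the paper's data):

```r
# Compare a pipeline's cleaned EEG against a known synthetic ground truth
set.seed(1)
n_trials <- 100; n_samples <- 500
ground_truth <- matrix(rnorm(n_trials * n_samples), n_trials, n_samples)
cleaned      <- ground_truth + matrix(rnorm(n_trials * n_samples, sd = 0.1),
                                      n_trials, n_samples)

# Reconstruction error: RMSE between cleaned output and ground truth
rmse <- sqrt(mean((cleaned - ground_truth)^2))

# Inter-trial variability: per-sample SD across trials; a pipeline should
# not add variability beyond what is intrinsic to the ground truth
itv_truth   <- apply(ground_truth, 2, sd)
itv_cleaned <- apply(cleaned, 2, sd)
excess_itv  <- mean(itv_cleaned - itv_truth)
```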


Subjects
Artifacts; Electroencephalography; Signal Processing, Computer-Assisted; Transcranial Magnetic Stimulation; Transcranial Magnetic Stimulation/methods; Transcranial Magnetic Stimulation/standards; Humans; Electroencephalography/methods; Electroencephalography/standards; Brain/physiology; Reproducibility of Results
2.
Mod Pathol ; 37(4): 100439, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38286221

ABSTRACT

This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models that assist in detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work, other researchers proposed an annotation workflow and quality checklist for computational pathology annotations. In this manuscript, we operationalize that workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI).


Subjects
Artificial Intelligence; Checklist; Humans; Prognosis; Image Processing, Computer-Assisted; Research Design
3.
Magn Reson Med ; 91(4): 1464-1477, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38044680

ABSTRACT

PURPOSE: The reproducibility of scientific reports is crucial to advancing human knowledge. This paper summarizes our experience replicating a balanced SSFP half-radial dual-echo imaging technique (bSTAR) using open-source frameworks, as a response to the 2023 ISMRM "repeat it with me" Challenge. METHODS: We replicated the bSTAR technique for thoracic imaging at 0.55T. The bSTAR pulse sequence is implemented in Pulseq, a vendor-neutral open-source rapid sequence prototyping environment. Image reconstruction is performed with the open-source Berkeley Advanced Reconstruction Toolbox (BART). The replication, termed open-source bSTAR, is tested by replicating several figures from the published literature. Original bSTAR, using the pulse sequence and image reconstruction developed by the original authors, and open-source bSTAR, with the pulse sequence and image reconstruction developed in this work, were performed in healthy volunteers. RESULTS: Both echo images obtained from open-source bSTAR contain no visible artifacts and show spatial resolution and image quality identical to those in the published literature. A direct head-to-head comparison on a healthy volunteer indicates that open-source bSTAR provides SNR, spatial resolution, level of artifacts, and conspicuity of pulmonary vessels comparable to original bSTAR. CONCLUSION: We have successfully replicated bSTAR lung imaging at 0.55T using two open-source frameworks. Full replication of a research method relying solely on the information in a research paper is unfortunately rare, but our success gives greater confidence that a research methodology can indeed be replicated as described.
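
As a hedged illustration of the reconstruction side only: BART's command-line tools can be driven from R. The sketch below assumes the `bart` binary is installed and on the PATH, that trajectory and k-space files exist in BART's .cfl/.hdr format, and that flags are illustrative — the actual flags for a bSTAR reconstruction would differ:

```r
# Drive a generic BART non-Cartesian reconstruction from R (illustrative only)
run_bart <- function(args) {
  status <- system2("bart", args)
  if (status != 0) stop("bart call failed: ", paste(args, collapse = " "))
}

run_bart(c("nufft", "-i", "traj", "ksp", "coil_imgs"))  # inverse gridding
run_bart(c("rss", "8", "coil_imgs", "img"))             # root-sum-of-squares over the coil dimension
```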


Assuntos
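Subjects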
Artefatos , Imageamento por Ressonância Magnética , Humanos , Reprodutibilidade dos Testes , Imageamento por Ressonância Magnética/métodos
4.
J Proteome Res ; 22(9): 2775-2784, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37530557

ABSTRACT

Missing values are a notable challenge when analyzing mass spectrometry-based proteomics data. While the field is still actively debating best practices, the challenge has grown with the emergence of mass spectrometry-based single-cell proteomics and its dramatic increase in missing values. A popular approach is to impute the missing values. Imputation has several drawbacks for which alternatives exist, but it remains a practical solution widely adopted in single-cell proteomics data analysis. This perspective discusses the advantages and drawbacks of imputation. We also highlight five main challenges linked to missing value management in single-cell proteomics. Future developments should aim to solve these challenges, whether through imputation or data modeling. The perspective concludes with recommendations for reporting missing values, for reporting methods that deal with missing values, and for proper encoding of missing values.
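
A minimal R sketch of the reporting recommendation, using a random matrix as a hypothetical peptide-by-cell intensity table: quantify and report missingness explicitly, and if imputation is used, name the method and keep the unimputed values:

```r
# Hypothetical peptide-by-cell intensity matrix with NAs encoding missing values
set.seed(1)
x <- matrix(rlnorm(200), nrow = 20,
            dimnames = list(paste0("pep", 1:20), paste0("cell", 1:10)))
x[sample(length(x), 60)] <- NA          # ~30% missing, typical of single-cell data

overall_missing  <- mean(is.na(x))      # global missing rate: report it
missing_per_cell <- colMeans(is.na(x))  # report per sample...
missing_per_pep  <- rowMeans(is.na(x))  # ...and per feature

# If imputing, state the method and retain x; minimum-value imputation per
# peptide is shown here as one common (and debated) choice
x_imputed <- t(apply(x, 1, function(p) { p[is.na(p)] <- min(p, na.rm = TRUE); p }))
```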


Subjects
Proteomics; Single-Cell Analysis; Proteomics/methods; Mass Spectrometry/methods; Algorithms
5.
Brain Topogr ; 36(2): 172-191, 2023 03.
Article in English | MEDLINE | ID: mdl-36575327

ABSTRACT

How functional magnetic resonance imaging (fMRI) data are analyzed depends on the researcher and the toolbox used, and it is not uncommon for the processing pipeline to be rewritten for each new dataset. Consequently, code transparency, quality control and objective analysis pipelines are important for improving reproducibility in neuroimaging studies. Toolboxes such as Nipype and fMRIPrep have documented the need for, and interest in, automated pre-processing analysis pipelines. Recent developments in data-driven models combined with high-resolution neuroimaging datasets have strengthened the need not only for a standardized preprocessing workflow, but also for a reliable and comparable statistical pipeline. Here, we introduce fMRIflows: a consortium of fully automatic neuroimaging pipelines for fMRI analysis, which performs standard preprocessing as well as 1st- and 2nd-level univariate and multivariate analyses. In addition to the standardized pre-processing pipelines, fMRIflows provides flexible temporal and spatial filtering to account for datasets with increasingly high temporal resolution and to help prepare data appropriately for advanced machine learning analyses, improving signal decoding accuracy and reliability. This paper first describes fMRIflows' structure and functionality, then explains its infrastructure and access, and lastly validates the toolbox by comparing it to other neuroimaging processing pipelines such as fMRIPrep, FSL and SPM. This validation was performed on three datasets with varying temporal sampling and acquisition parameters to demonstrate its flexibility and robustness. fMRIflows is a fully automatic fMRI processing pipeline which uniquely offers univariate and multivariate single-subject and group analyses as well as pre-processing.


Subjects
Magnetic Resonance Imaging; Software; Humans; Magnetic Resonance Imaging/methods; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Neuroimaging; Brain/diagnostic imaging
6.
Clin Trials ; 20(1): 89-92, 2023 02.
Article in English | MEDLINE | ID: mdl-36169229

ABSTRACT

BACKGROUND: In clinical trial development, submitting applications, amendments, supplements, and reports on medicinal products to regulatory agencies is a critical step. The electronic common technical document is the standard format for worldwide regulatory submission. There is a growing trend of using R for clinical trial analysis and reporting as part of regulatory submissions, where R functions, analysis scripts, analysis results, and all proprietary code dependencies are required to be included. One significant unmet gap is the lack of tools, guidance, and publicly available examples for preparing submission R programs following the electronic common technical document specification. METHODS: We introduce a simple and sufficient R package, pkglite, to convert analysis scripts and associated proprietary dependency R packages into a compact, text-based file, which makes the submission document self-contained and easy to restore, transfer, review, and submit following the electronic common technical document specification and regulatory guidelines (e.g. the study data technical conformance guide from the US Food and Drug Administration). The pkglite R package is published on the Comprehensive R Archive Network and developed on GitHub. RESULTS: As a tool, pkglite can pack and unpack multiple R packages with their dependencies to facilitate reproduction, making it an off-the-shelf tool for both sponsors and reviewers. As a grammar, pkglite provides an explicit trace of the packing scope using the concept of file specifications. As a standard, pkglite offers an open file format to represent and exchange R packages as a text file. We use a mock-up example to demonstrate the workflow of using pkglite to prepare submission programs following the electronic common technical document specification. CONCLUSION: pkglite and the proposed workflow enable sponsors to submit well-organized R scripts following the electronic common technical document specification. The workflow has been used in the first publicly available R-based submission to the US Food and Drug Administration by the R Consortium R submission working group (https://www.r-consortium.org/blog/2022/03/16/update-successful-r-based-test-package-submitted-to-fda).
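
The pack/unpack workflow described above looks like this in practice (package paths and file names are placeholders):

```r
library(pkglite)

# Pack analysis scripts and proprietary dependency packages into a single
# text file suitable for an eCTD submission; file_default() supplies the
# default file specification covering the standard files of an R package
spec <- file_default()
pack(
  collate("path/to/proprietarypkg1", spec),
  collate("path/to/proprietarypkg2", spec),
  output = "r0pkgs.txt"
)

# A reviewer restores the packages from the text file
unpack("r0pkgs.txt", output = "path/to/restore")
```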


Subjects
Electronics; United States; Humans; United States Food and Drug Administration
7.
BMC Med Inform Decis Mak ; 23(1): 8, 2023 01 16.
Article in English | MEDLINE | ID: mdl-36647111

ABSTRACT

BACKGROUND: The CVD-COVID-UK consortium was formed to understand the relationship between COVID-19 and cardiovascular diseases through analyses of harmonised electronic health records (EHRs) across the four UK nations. Beyond COVID-19, data harmonisation and common approaches enable analysis within and across independent Trusted Research Environments. Here we describe the reproducible harmonisation method developed using large-scale EHRs in Wales to accommodate the fast and efficient implementation of cross-nation analysis in England and Wales as part of the CVD-COVID-UK programme. We characterise current challenges and share lessons learnt. METHODS: Serving the scope and scalability of multiple study protocols, we used linked, anonymised individual-level EHR, demographic and administrative data held within the SAIL Databank for the population of Wales. The harmonisation method was implemented as a four-layer reproducible process, starting from raw data in the first layer; each of layers two to four is framed by, though not limited to, the challenges and lessons learnt that we characterise. We achieved curated data in the second layer, extracted phenotyped data in the third layer, and captured any project-specific requirements in the fourth layer. RESULTS: Using the implemented four-layer harmonisation method, we retrieved approximately 100 health-related variables for the 3.2 million individuals in Wales, harmonised with corresponding variables for over 56 million individuals in England. We processed 13 data sources into the first layer of our harmonisation method: five of these are updated daily or weekly, and the rest at various frequencies, providing sufficient data flow for frequent capture of up-to-date demographic, administrative and clinical information. CONCLUSIONS: We implemented an efficient, transparent, scalable, and reproducible harmonisation method that enables multi-nation collaborative research. With a current focus on COVID-19 and its relationship with cardiovascular outcomes, the harmonised data have supported a wide range of research activities across the UK.


Subjects
COVID-19; Electronic Health Records; Humans; COVID-19/epidemiology; Wales/epidemiology; England
8.
J Appl Biomech ; 39(6): 421-431, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37793655

ABSTRACT

A muscle's architecture, defined as the geometric arrangement of its fibers with respect to its mechanical line of action, impacts its ability to produce force and to shorten or lengthen under load. Ultrasound and other noninvasive imaging methods have contributed significantly to our understanding of these structure-function relationships. The goal of this work was to develop a MATLAB toolbox for tracking and mathematically representing muscle architecture at the fascicle scale, based on brightness-mode ultrasound imaging data. The MuscleUS_Toolbox allows user-performed segmentation of a region of interest and automated modeling of local fascicle orientation; calculation of streamlines between aponeuroses of origin and insertion; and quantification of fascicle length, pennation angle, and curvature. A method is described for optimizing the fascicle orientation modeling process, and the capabilities of the toolbox for quantifying and visualizing fascicle architecture are illustrated in the human tibialis anterior muscle. The toolbox is freely available.
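
The toolbox itself is MATLAB, but the pennation angle it quantifies reduces to the angle between a fascicle's orientation vector and the line of the aponeurosis. A language-agnostic R illustration with made-up vectors:

```r
# Angle between two 2-D direction vectors, in degrees
angle_deg <- function(u, v) {
  cosang <- sum(u * v) / (sqrt(sum(u^2)) * sqrt(sum(v^2)))
  acos(min(max(cosang, -1), 1)) * 180 / pi   # clamp for numerical safety
}

fascicle    <- c(cos(20 * pi / 180), sin(20 * pi / 180))  # hypothetical fascicle direction
aponeurosis <- c(1, 0)                                    # hypothetical aponeurosis line
angle_deg(fascicle, aponeurosis)                          # ~20 degrees of pennation
```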


Subjects
Muscle, Skeletal; Humans; Muscle, Skeletal/diagnostic imaging; Muscle, Skeletal/physiology; Ultrasonography
9.
Cytometry A ; 101(4): 351-360, 2022 04.
Article in English | MEDLINE | ID: mdl-34967113

ABSTRACT

Mislabeling samples or data with the wrong participant information can affect study integrity and lead investigators to draw inaccurate conclusions. Quality control to prevent these types of errors is commonly embedded into the analysis of genomic datasets, but a similar identification strategy is not standard for cytometric data. Here, we present a method for detecting sample identification errors in cytometric data using expression of human leukocyte antigen (HLA) class I alleles. We measured HLA-A*02 and HLA-B*07 expression in three longitudinal samples from 41 participants using a 33-marker CyTOF panel designed to identify major immune cell types. Three of 123 samples (2.4%) showed HLA allele expression that did not match their longitudinal pairs. Furthermore, the cytometric signatures of these same three samples did not match qPCR HLA class I allele data, suggesting that they were accurately identified as mismatches. We conclude that this technique is useful for detecting sample-labeling errors in cytometric analyses of longitudinal data. It could also be used in conjunction with another method, such as GWAS or PCR, to detect errors in cross-sectional data. We suggest that widespread adoption of this or similar techniques will improve the quality of clinical studies that utilize cytometry.
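
The matching logic lends itself to a compact sketch. Below is a hypothetical R illustration: each sample is scored HLA-A*02/HLA-B*07 positive or negative from the staining, and samples from the same participant must agree across visits:

```r
# Hypothetical gating results: one row per sample
samples <- data.frame(
  participant = rep(c("P01", "P02"), each = 3),
  visit       = rep(1:3, times = 2),
  hla_a02     = c(TRUE, TRUE, TRUE,  FALSE, TRUE, FALSE),  # P02 visit 2 disagrees
  hla_b07     = c(FALSE, FALSE, FALSE, TRUE, TRUE, TRUE)
)

# A participant is flagged when allele calls disagree across their visits
flag_mismatch <- function(d) length(unique(d$hla_a02)) > 1 ||
                             length(unique(d$hla_b07)) > 1
mismatched <- names(Filter(flag_mismatch, split(samples, samples$participant)))
mismatched  # "P02" -> candidate labeling error; confirm against qPCR typing
```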


Subjects
Cross-Sectional Studies; Alleles; Humans; Real-Time Polymerase Chain Reaction
10.
BMC Med Res Methodol ; 22(1): 176, 2022 06 23.
Article in English | MEDLINE | ID: mdl-35739465

ABSTRACT

BACKGROUND: The scarcity of data and statistical code published alongside journal articles is a significant barrier to open scientific discourse and to the reproducibility of research. Information governance restrictions inhibit the active dissemination of individual-level data to accompany published manuscripts. Realistic, high-fidelity time-to-event synthetic data can accelerate methodological developments in survival analysis and beyond by enabling researchers to access and test published methods using data similar to those on which the methods were developed. METHODS: We present methods to accurately emulate the covariate patterns and survival times found in real-world datasets using synthetic data techniques, without compromising patient privacy. We model the joint covariate distribution of the original data using covariate-specific sequential conditional regression models, then fit a complex flexible parametric survival model from which to generate survival times conditional on individual covariate patterns. We recreate the administrative censoring mechanism using the last observed follow-up date information from the initial dataset. Metrics for evaluating the accuracy of the synthetic data, and the non-identifiability of individuals from the original dataset, are presented. RESULTS: We successfully create a synthetic version of an example colon cancer dataset consisting of 9064 patients, which closely resembles both the covariate distributions and the survival times of the original data without containing any exact information from it, therefore allowing the data to be published openly alongside research. CONCLUSIONS: We evaluate the effectiveness of the methods for constructing synthetic data and provide evidence that there is minimal risk that a given patient from the original data could be identified from their individual unique patient information. Synthetic datasets generated with this methodology could be made available alongside published research without breaching data privacy protocols, allowing data and code to accompany methodological or applied manuscripts and greatly improving the transparency and accessibility of medical research.
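
A greatly simplified R sketch of the generation strategy, with simulated data standing in for the real dataset and a standard Weibull model standing in for the paper's flexible parametric survival model:

```r
library(survival)
set.seed(1)

# Hypothetical stand-in for the original patient-level data
orig <- data.frame(age = rnorm(500, 60, 10), sex = rbinom(500, 1, 0.5))
orig$time   <- rweibull(500, shape = 1.2, scale = exp(2 - 0.01 * orig$age))
orig$status <- as.integer(orig$time <= 5)   # administrative censoring at 5 years
orig$time   <- pmin(orig$time, 5)

# Step 1: model covariates sequentially (here: sex, then age | sex)
p_sex     <- mean(orig$sex)
age_fit   <- lm(age ~ sex, data = orig)
synth     <- data.frame(sex = rbinom(500, 1, p_sex))
synth$age <- predict(age_fit, synth) + rnorm(500, sd = sigma(age_fit))

# Step 2: fit a parametric survival model, draw times conditional on covariates
fit   <- survreg(Surv(time, status) ~ age + sex, data = orig, dist = "weibull")
lp    <- predict(fit, newdata = synth, type = "lp")
t_new <- rweibull(500, shape = 1 / fit$scale, scale = exp(lp))

# Step 3: re-apply the administrative censoring mechanism
synth$time   <- pmin(t_new, 5)
synth$status <- as.integer(t_new <= 5)
```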


Subjects
Biomedical Research; Humans; Reproducibility of Results; Survival Analysis
11.
BMC Bioinformatics ; 22(1): 610, 2021 Dec 23.
Article in English | MEDLINE | ID: mdl-34949163

ABSTRACT

BACKGROUND: The interpretation of results from transcriptome profiling experiments via RNA sequencing (RNA-seq) can be a complex task, where the essential information is distributed among different tabular and list formats: normalized expression values, results from differential expression analysis, and results from functional enrichment analyses. A number of tools and databases are widely used for identifying relevant functional patterns, yet their contextualization within the data and results at hand is often not straightforward, especially if these analytic components are not combined together efficiently. RESULTS: We developed the GeneTonic software package, which serves as a comprehensive toolkit for streamlining the interpretation of functional enrichment analyses by fully leveraging the information of expression values in a differential expression context. GeneTonic is implemented in R and Shiny, leveraging packages that enable HTML-based interactive visualizations for executing drilldown tasks seamlessly, viewing the data at increased levels of detail. GeneTonic is integrated with the core classes of existing Bioconductor workflows and can accept the output of many widely used tools for pathway analysis, making this approach applicable to a wide range of use cases. Users can effectively navigate interlinked components (otherwise available as flat text or spreadsheet tables), bookmark features of interest during exploration sessions, and obtain a tailored HTML report at the end, thus combining the benefits of both interactivity and reproducibility. CONCLUSION: GeneTonic is distributed as an R package in the Bioconductor project (https://bioconductor.org/packages/GeneTonic/) under the MIT license. Offering both bird's-eye views of the components of transcriptome data analysis and the detailed inspection of single genes, individual signatures, and their relationships, GeneTonic aims to simplify the interpretation of complex and compelling RNA-seq datasets for researchers with many different expertise profiles.
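
A typical invocation, following the conventions of the package vignette (the four objects are placeholders produced by an upstream DESeq2 and functional enrichment workflow, with enrichment results converted via the package's shake_*() helpers):

```r
library(GeneTonic)

# Launch the interactive app on the combined differential expression
# and enrichment results
GeneTonic(
  dds            = dds,        # DESeqDataSet with the expression values
  res_de         = res_de,     # DESeq2 results object
  res_enrich     = res_enrich, # enrichment results, e.g. via shake_enrichResult()
  annotation_obj = anno_df     # data.frame mapping gene IDs to symbols
)
```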


Subjects
RNA; Software; Base Sequence; Reproducibility of Results; Sequence Analysis, RNA
12.
J Proteome Res ; 20(1): 1063-1069, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32902283

ABSTRACT

We present version 2 of the MSnbase R/Bioconductor package. MSnbase provides infrastructure for the manipulation, processing, and visualization of mass spectrometry data. We focus on the new on-disk infrastructure, which allows the handling of large raw mass spectrometry experiments on commodity hardware, and illustrate how the package is used for elegant data processing, method development, and visualization.
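
The on-disk mode is exposed through the package's documented entry point; in the sketch below, "file.mzML" is a placeholder for a real raw data file:

```r
library(MSnbase)

# On-disk backend: raw files are indexed rather than loaded, so large
# experiments fit on commodity hardware
ms  <- readMSData("file.mzML", mode = "onDisk")  # metadata only, spectra stay on disk
ms2 <- filterMsLevel(ms, 2L)                     # lazy: still nothing in memory
sp  <- ms2[[1]]                                  # spectra are read on access
```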


Subjects
Proteomics; Software; Mass Spectrometry
13.
J Comput Chem ; 42(18): 1321-1331, 2021 07 05.
Article in English | MEDLINE | ID: mdl-33931885

ABSTRACT

We introduce MoSDeF Cassandra, a new Python interface for the Cassandra Monte Carlo software built on the Molecular Simulation Design Framework (MoSDeF). MoSDeF Cassandra provides a simplified user interface, offers broader interoperability with other molecular simulation codes, enables the construction of programmatic and reproducible molecular simulation workflows, and builds the infrastructure necessary for high-throughput Monte Carlo studies. Many of the capabilities of MoSDeF Cassandra are enabled via tight integration with MoSDeF. We discuss the motivation and design of MoSDeF Cassandra and proceed to demonstrate both simple use cases and more complex workflows, including adsorption in porous media and a combined molecular dynamics/Monte Carlo workflow for computing lateral diffusivity in graphene slit pores. The examples presented herein demonstrate how even relatively complex simulation workflows can be reduced to, at most, a few files of Python code that can be version-controlled and shared with other researchers. We believe this paradigm will enable more rapid research advances and represents the future of molecular simulations.

14.
Expert Rev Proteomics ; 18(10): 835-843, 2021 10.
Article in English | MEDLINE | ID: mdl-34602016

ABSTRACT

INTRODUCTION: Mass spectrometry-based proteomics is actively embracing quantitative, single-cell level analyses. Indeed, recent advances in sample preparation and mass spectrometry (MS) have enabled the emergence of quantitative MS-based single-cell proteomics (SCP). While exciting and promising, SCP still has many rough edges: current analysis workflows are custom and built from scratch, and the field is therefore craving standardized software that promotes principled and reproducible SCP data analyses. AREAS COVERED: This special report is a first step toward the formalization and standardization of SCP data analysis. scp, the software that accompanies this work, successfully replicates one of the landmark SCP studies and is applicable to other experiments and designs. We created a repository containing the replicated workflow with comprehensive documentation in order to favor further dissemination and improvement of SCP data analyses. EXPERT OPINION: Replicating SCP data analyses uncovers important challenges. We describe two such challenges in detail: batch correction and data missingness. We provide the current state of the art and illustrate the associated limitations. We also highlight the intimate dependence between batch effects and data missingness and offer avenues for dealing with these exciting challenges.


Subjects
Proteomics; Software; Computational Biology; Mass Spectrometry; Workflow
15.
J Anim Ecol ; 90(9): 2000-2004, 2021 09.
Article in English | MEDLINE | ID: mdl-34525215

ABSTRACT

In Focus: Culina, A., Adriaensen, F., Bailey, L. D., et al. (2021) Connecting the data landscape of long-term ecological studies: The SPI-Birds data hub. Journal of Animal Ecology, https://doi.org/10.1111/1365-2656.13388. Long-term, individual-based datasets have been at the core of many key discoveries in ecology, and calls for the collection, curation and release of these kinds of ecological data are contributing to a flourishing open-data revolution in ecology. Birds, in particular, have been the focus of international research for decades, resulting in a number of uniquely long-term studies, but accessing these datasets has been historically challenging. Culina et al. (2021) introduce an online repository of individual-level, long-term bird records with ancillary data (e.g. genetics), which will enable key ecological questions to be answered on a global scale. Alongside these opportunities, however, we argue that the ongoing open-data revolution comes with four key challenges relating to the (1) harmonisation of, (2) biases in, (3) expertise in and (4) communication of, open ecological data. Here, we discuss these challenges and how key efforts such as those by Culina et al. are using FAIR (Findable, Accessible, Interoperable and Reusable) principles to overcome them. The open-data revolution will undoubtedly reshape our understanding of ecology, but with it the ecological community has a responsibility to ensure this revolution is ethical and effective.


Subjects
Birds; Ecology; Animals; Longitudinal Studies
16.
Proc Natl Acad Sci U S A ; 115(11): 2628-2631, 2018 03 13.
Article in English | MEDLINE | ID: mdl-29531051

ABSTRACT

Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which most published results are unreliable due to growing problems with research and publication practices. This article provides an overview of recent evidence suggesting that this narrative is mistaken, and argues that a narrative of epochal changes and empowerment of scientists would be more accurate, inspiring, and compelling.

17.
Proc Natl Acad Sci U S A ; 115(11): 2584-2589, 2018 03 13.
Article in English | MEDLINE | ID: mdl-29531050

ABSTRACT

A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed, such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluates the effectiveness of a journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We were able to obtain artifacts from 44% of our sample and to reproduce the findings for 26%. We find this policy (author provision of data and code postpublication upon request) an improvement over no policy, but currently insufficient for reproducibility.

18.
J Med Internet Res ; 23(10): e29259, 2021 10 29.
Article in English | MEDLINE | ID: mdl-34714250

ABSTRACT

BACKGROUND: Electronic health records (EHRs, such as those created by an anesthesia management system) generate a large amount of data that can notably be reused for clinical audits and scientific research. The sharing of these data and tools is generally hampered by the lack of system interoperability. To overcome these issues, Observational Health Data Sciences and Informatics (OHDSI) developed the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) to standardize EHR data and promote large-scale observational and longitudinal research. Anesthesia data have not previously been mapped into the OMOP CDM. OBJECTIVE: The primary objective was to transform anesthesia data into the OMOP CDM. The secondary objective was to provide vocabularies, queries, and dashboards that might promote the exploitation and sharing of anesthesia data through the CDM. METHODS: Using our local anesthesia data warehouse, a group of 5 experts from 5 different medical centers identified local concepts related to anesthesia. The concepts were then matched with standard concepts in the OHDSI vocabularies. We performed structural mapping between the design of our local anesthesia data warehouse and the OMOP CDM tables and fields. To validate the implementation of anesthesia data into the OMOP CDM, we developed a set of queries and dashboards. RESULTS: We identified 522 concepts related to anesthesia care. They were classified as demographics, units, measurements, operating room steps, drugs, periods of interest, and features. After semantic mapping, 353 (67.7%) of these anesthesia concepts were mapped to OHDSI concepts, and the remaining 169 (32.3%) concepts, related to periods and features, were added to the OHDSI vocabularies. Eight OMOP CDM tables were implemented with anesthesia data, and 2 new tables (EPISODE and FEATURE) were added to store secondarily computed data. We integrated data from 572,609 operations and provide the code for a set of 8 queries and 4 dashboards related to anesthesia care. CONCLUSIONS: Generic data concerning demographics, drugs, units, measurements, and operating room steps were already available in the OHDSI vocabularies. However, most of the intraoperative concepts (the duration of specific steps, an episode of hypotension, etc.) were not. The OMOP mapping provided here enables anesthesia data reuse.
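
Once anesthesia data sit in the CDM, standard OMOP queries apply. A hedged R/DBI sketch (the connection details and the measurement concept_id are placeholders; table and column names follow the standard OMOP CDM v5):

```r
library(DBI)

# Hypothetical connection to an OMOP CDM database
con <- dbConnect(RPostgres::Postgres(), dbname = "omop")

# Pull intraoperative blood pressure measurements per person; the
# concept_id shown is illustrative and should be verified locally
intraop_bp <- dbGetQuery(con, "
  SELECT p.person_id, m.measurement_datetime, m.value_as_number
  FROM measurement m
  JOIN person p ON p.person_id = m.person_id
  WHERE m.measurement_concept_id = 3004249  -- systolic blood pressure
")
```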


Subjects
Anesthesia; Medical Informatics; Data Science; Databases, Factual; Electronic Health Records; Hospitals; Humans
19.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073282

ABSTRACT

Computer Vision is a cross-disciplinary research field whose main purpose is to understand the surrounding environment as closely as possible to human perception. Image processing systems are continuously growing and expanding into more complex systems, usually tailored to the specific needs of the applications they serve. To better serve this purpose, research on the architecture and design of such systems is also important. We present the End-to-End Computer Vision Framework (EECVF), an open-source solution that aims to support researchers and teachers within the vast image processing field. The framework incorporates Computer Vision features and Machine Learning models that researchers can use. Given the continuous need to add new Computer Vision algorithms in day-to-day research activity, our proposed framework has the advantage of a configurable and scalable architecture. While the main focus of the framework is the Computer Vision processing pipeline, it also offers solutions for incorporating more complex activities, such as training Machine Learning models. EECVF aims to become a useful tool for learning activities in the Computer Vision field, as it allows the learner and the teacher to handle only the topics at hand, and not the interconnections necessary for the visual processing flow.

20.
BMC Bioinformatics ; 21(1): 565, 2020 Dec 09.
Article in English | MEDLINE | ID: mdl-33297942

ABSTRACT

BACKGROUND: RNA sequencing (RNA-seq) is an increasingly popular tool for transcriptome profiling. A key point for making the best use of the available data is to provide software tools that are easy to use but still offer flexibility and transparency in the adopted methods. Despite the availability of many packages focused on detecting differential expression, a method to streamline this type of bioinformatics analysis in a comprehensive, accessible, and reproducible way has been lacking. RESULTS: We developed the ideal software package, which serves as a web application for interactive and reproducible RNA-seq analysis, while producing a wealth of visualizations to facilitate data interpretation. ideal is implemented in R using the Shiny framework and is fully integrated with the existing core structures of the Bioconductor project. Users can perform the essential steps of the differential expression analysis workflow in an assisted way and generate a broad spectrum of publication-ready outputs, including diagnostic and summary visualizations in each module, all the way down to functional analysis. ideal also offers the possibility to seamlessly generate a full HTML report for storing and sharing results together with code for reproducibility. CONCLUSION: ideal is distributed as an R package in the Bioconductor project (http://bioconductor.org/packages/ideal/), and provides a solution for performing interactive and reproducible analyses of summarized RNA-seq expression data, empowering researchers with many different profiles (life scientists, clinicians, but also experienced bioinformaticians) to make ideal use of the data at hand.
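
Launching the application follows the package's documented entry point; in this sketch, dds is a placeholder for a DESeqDataSet built upstream:

```r
library(ideal)

ideal()               # start the Shiny app and load data interactively
ideal(dds_obj = dds)  # or seed the app with an existing DESeq2 analysis
```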


Subjects
Gene Expression Profiling; Software; Base Sequence; Data Interpretation, Statistical; Gene Expression Regulation; Humans; Reproducibility of Results