Results 1 - 20 of 37
1.
PLoS One; 19(1): e0296725, 2024.
Article in English | MEDLINE | ID: mdl-38285635

ABSTRACT

Convolutional neural networks (CNNs) are currently among the most widely used deep neural network (DNN) architectures and achieve state-of-the-art performance on many problems. Originally applied to computer vision tasks, CNNs work well with any data that has a spatial relationship, besides images, and have been applied to many different fields. However, recent works have highlighted numerical stability challenges in DNNs, which also relate to their known sensitivity to noise injection. These challenges can jeopardise their performance and reliability. This paper investigates DeepGOPlus, a CNN that predicts protein function. DeepGOPlus has achieved state-of-the-art performance and can successfully take advantage of and annotate the abundant protein sequences emerging in proteomics. We determine the numerical stability of the model's inference stage by quantifying the numerical uncertainty resulting from perturbations of the underlying floating-point data. In addition, we explore the opportunity to use reduced-precision floating-point formats for DeepGOPlus inference to reduce memory consumption and latency. This is achieved by instrumenting DeepGOPlus' execution using Monte Carlo Arithmetic, a technique that experimentally quantifies floating-point operation errors, and VPREC, a tool that emulates results with customizable floating-point precision formats. Focus is placed on the inference stage as it is the primary deliverable of the DeepGOPlus model, widely applicable across different environments. All in all, our results show that although the DeepGOPlus CNN is very stable numerically, it can only be selectively implemented with lower-precision floating-point formats. We conclude that predictions obtained from the pre-trained DeepGOPlus model are numerically very reliable and use existing floating-point formats efficiently.
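The reduced-precision experiments described above can be pictured with a small sketch: emulate a lower-precision floating-point format by rounding each value to a given number of mantissa bits, in the spirit of VPREC but not its actual implementation. The function name and rounding scheme are illustrative assumptions.

```python
import math

def round_to_precision(x, mantissa_bits):
    """Round x to the given number of mantissa bits, emulating a
    reduced-precision floating-point format (VPREC-style sketch)."""
    if x == 0.0 or math.isinf(x) or math.isnan(x):
        return x
    m, e = math.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)
```

Running inference while rounding intermediate results this way would show how many mantissa bits a model can tolerate before its predictions change.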


Subjects
Neural Networks, Computer; Proteins; Reproducibility of Results; Amino Acid Sequence; Monte Carlo Method
2.
PLoS One; 19(1): e0295069, 2024.
Article in English | MEDLINE | ID: mdl-38295031

ABSTRACT

CONTEXT: A major open challenge in Parkinson's disease (PD) research is the identification of biomarkers of disease progression. While magnetic resonance imaging is a potential source of PD biomarkers, none of the magnetic resonance imaging measures of PD are robust enough to warrant their adoption in clinical research. This study is part of a project that aims to replicate 11 PD studies reviewed in a recent survey (JAMA Neurology, 78(10), 2021) to investigate the robustness of PD neuroimaging findings to data and analytical variations. OBJECTIVE: This study attempts to replicate the results of Hanganu et al. (Brain, 137(4), 2014) using data from the Parkinson's Progression Markers Initiative (PPMI). METHODS: Using 25 PD subjects and 18 healthy controls, we analyzed the rate of change of cortical thickness and of the volume of subcortical structures, and we measured the relationship between structural changes and cognitive decline. We compared our findings to the results of the original study. RESULTS: (1) As in the original study, PD patients with mild cognitive impairment (MCI) exhibited increased cortical thinning over time, compared to patients without MCI, in the right middle temporal gyrus, insula, and precuneus. (2) The rate of cortical thinning in the left inferior temporal and precentral gyri in PD patients correlated with the change in cognitive performance. (3) There were no group differences in the change of subcortical volumes. (4) We did not find a relationship between the change in subcortical volumes and the change in cognitive performance. CONCLUSION: Despite important differences in the dataset used in this replication study, and despite differences in sample size, we were able to partially replicate the original results. We produced a publicly available reproducible notebook allowing researchers to further investigate the reproducibility of the results of Hanganu et al. (2014) as more data are added to PPMI.


Subjects
Cognitive Dysfunction; Parkinson Disease; Humans; Parkinson Disease/pathology; Cerebral Cortex/pathology; Cerebral Cortical Thinning/pathology; Reproducibility of Results; Brain/diagnostic imaging; Brain/pathology; Cognitive Dysfunction/pathology; Magnetic Resonance Imaging; Biomarkers
3.
Sci Data; 10(1): 189, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37024500

ABSTRACT

We present the Canadian Open Neuroscience Platform (CONP) portal, built to answer the research community's need for flexible data sharing resources and to provide advanced search tools and processing infrastructure. This portal differs from previous data sharing projects in that it integrates datasets originating from a number of already existing platforms or databases through DataLad, a file-level data integrity and access layer. The portal is also an entry point for searching and accessing a large number of standardized and containerized software tools, and it links to a computing infrastructure. It leverages community standards to help document and facilitate the reuse of both datasets and tools, and it already shows growing community adoption, giving access to more than 60 neuroscience datasets and over 70 tools. The CONP portal demonstrates the feasibility of, and offers a model for, a distributed data and tool management system spanning 17 institutions across Canada.


Subjects
Databases, Factual; Software; Canada; Information Dissemination
4.
PLoS One; 16(11): e0250755, 2021.
Article in English | MEDLINE | ID: mdl-34724000

ABSTRACT

The analysis of brain-imaging data requires complex processing pipelines to support findings on brain function or pathologies. Recent work has shown that variability in analytical decisions, small amounts of noise, or differences in computational environments can lead to substantial differences in results, undermining trust in conclusions. We explored the instability of results by instrumenting a structural connectome estimation pipeline with Monte Carlo Arithmetic to introduce random noise throughout. We evaluated the reliability of the connectomes, the robustness of their features, and the eventual impact on analysis. The stability of results was found to range from perfectly stable (i.e., all digits of data significant) to highly unstable (i.e., 0-1 significant digits). This paper highlights the potential of leveraging induced variance in estimates of brain connectivity to reduce the bias in networks without compromising reliability, alongside increasing the robustness and potential upper bound of their applications in the classification of individual differences. We demonstrate that stability evaluations are necessary for understanding the error inherent in brain imaging experiments, and that numerical analysis can be applied to typical analytical workflows both in brain imaging and in other domains of computational science, as the techniques used are data- and context-agnostic and globally relevant. Overall, while the extreme variability in results due to analytical instabilities could severely hamper our understanding of brain organization, it also affords us the opportunity to increase the robustness of findings.
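The stability measure behind "all digits significant" versus "0-1 significant digits" can be summarized across repeated perturbed runs. A minimal sketch, assuming Parker's classic definition (minus log10 of the relative standard deviation); the 15.95-digit cap for identical double-precision samples follows the convention used in Monte Carlo Arithmetic tooling.

```python
import math

def significant_digits(samples):
    """Estimate the number of significant base-10 digits shared by
    repeated runs of the same computation under random perturbations."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in samples) / (n - 1))
    if mu == 0.0:
        return 0.0
    if sigma == 0.0:
        return 15.95  # all double-precision digits agree
    return max(0.0, -math.log10(sigma / abs(mu)))
```

Applied to, say, an edge weight of a connectome recomputed under many noise injections, this yields the per-feature stability profile described above.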


Assuntos
Encéfalo/fisiologia , Conectoma , Modelos Neurológicos , Rede Nervosa/fisiologia , Humanos , Incerteza
5.
Neuroimage; 244: 118589, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34563682

ABSTRACT

MRI plays a crucial role in multiple sclerosis diagnosis and patient follow-up. In particular, the delineation of T2-FLAIR hyperintense lesions is crucial, although it is mostly performed manually, a tedious task. Many methods have thus been proposed to automate this task. However, sufficiently large datasets with thorough expert manual segmentations are still lacking to evaluate these methods. We present a unique dataset for MS lesion segmentation evaluation. It consists of 53 patients acquired on 4 different scanners with a harmonized protocol. Hyperintense lesions on FLAIR were manually delineated for each patient by 7 experts, with control on the T2 sequence, and gathered into a consensus segmentation for evaluation. We provide raw and preprocessed data and a split of the dataset into training and testing data, the latter including data from a scanner not present in the training dataset. We strongly believe that this dataset will become a reference in MS lesion segmentation evaluation, allowing many aspects to be evaluated: performance on an unseen scanner, comparison to individual experts' performance, comparison to other challengers who have already used this dataset, etc.
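Combining several expert delineations into a consensus can be illustrated with a voxel-wise majority vote over binary masks. This is a deliberately simplified stand-in for the more elaborate consensus algorithms typically used for such datasets (e.g. STAPLE variants); the function name and flat-list mask representation are assumptions.

```python
def majority_consensus(masks, threshold=None):
    """Combine binary expert masks (equal-length lists of 0/1) into a
    consensus mask: a voxel is kept when strictly more than `threshold`
    experts marked it (default: half of the experts)."""
    n = len(masks)
    if threshold is None:
        threshold = n / 2.0
    size = len(masks[0])
    return [1 if sum(m[i] for m in masks) > threshold else 0
            for i in range(size)]
```

With 7 experts, a voxel enters the consensus when at least 4 of them delineated it.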


Assuntos
Imageamento por Ressonância Magnética/métodos , Esclerose Múltipla/diagnóstico por imagem , Adulto , Conjuntos de Dados como Assunto , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Adulto Jovem
6.
Gigascience; 10(8), 2021 08 20.
Article in English | MEDLINE | ID: mdl-34414422

ABSTRACT

As the global health crisis unfolded, many academic conferences moved online in 2020. This move has been hailed as a positive step towards inclusivity for its attenuation of economic, physical, and legal barriers, and it effectively enabled many individuals from groups that have traditionally been underrepresented to join and participate. A number of studies have outlined how moving online made it possible to gather a more global community and increased opportunities for individuals with various constraints, e.g., caregiving responsibilities. Yet the mere existence of online conferences is no guarantee that everyone can attend and participate meaningfully. In fact, many elements of an online conference are still significant barriers to truly diverse participation: the tools used can be inaccessible to some individuals; the scheduling choices can favour some geographical locations; the set-up of the conference can provide more visibility to well-established researchers and reduce opportunities for early-career researchers. While acknowledging the benefits of an online setting, especially for individuals who have traditionally been underrepresented or excluded, we recognize that fostering social justice requires inclusivity to be actively centered in every aspect of online conference design. Here, we draw from the literature and from our own experiences to identify practices that purposefully encourage a diverse community to attend, participate in, and lead online conferences. Reflecting on how to design more inclusive online events is especially important as multiple scientific organizations have announced that they will continue offering an online version of their event when in-person conferences resume.

7.
Elife; 10, 2021 08 25.
Article in English | MEDLINE | ID: mdl-34431476

ABSTRACT

Neuroimaging stands to benefit from emerging ultrahigh-resolution 3D histological atlases of the human brain; the first of which is 'BigBrain'. Here, we review recent methodological advances for the integration of BigBrain with multi-modal neuroimaging and introduce a toolbox, 'BigBrainWarp', that combines these developments. The aim of BigBrainWarp is to simplify workflows and support the adoption of best practices. This is accomplished with a simple wrapper function that allows users to easily map data between BigBrain and standard MRI spaces. The function automatically pulls specialised transformation procedures, based on ongoing research from a wide collaborative network of researchers. Additionally, the toolbox improves accessibility of histological information through dissemination of ready-to-use cytoarchitectural features. Finally, we demonstrate the utility of BigBrainWarp with three tutorials and discuss the potential of the toolbox to support multi-scale investigations of brain organisation.


Assuntos
Encéfalo/diagnóstico por imagem , Imageamento Tridimensional/métodos , Neuroimagem/métodos , Software , Idoso , Atlas como Assunto , Humanos , Imageamento por Ressonância Magnética , Masculino
8.
Gigascience; 10(6), 2021 06 03.
Article in English | MEDLINE | ID: mdl-34080631

ABSTRACT

BACKGROUND: Software containers greatly facilitate the deployment and reproducibility of scientific data analyses in various platforms. However, container images often contain outdated or unnecessary software packages, which increases the number of security vulnerabilities in the images, widens the attack surface in the container host, and creates substantial security risks for computing infrastructures at large. This article presents a vulnerability analysis of container images for scientific data analysis. We compare results obtained with 4 vulnerability scanners, focusing on the use case of neuroscience data analysis, and quantifying the effect of image update and minification on the number of vulnerabilities. RESULTS: We find that container images used for neuroscience data analysis contain hundreds of vulnerabilities, that software updates remove roughly two-thirds of these vulnerabilities, and that removing unused packages is also effective. CONCLUSIONS: We provide recommendations on how to build container images with fewer vulnerabilities.
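The effect of updates reported in the results can be mimicked with a toy scanner: match installed package versions against a database of known-vulnerable version ranges. Real scanners such as Trivy or Clair match against CVE feeds and distribution security trackers; the data model below (package name to a list of (first fixed version, advisory id) pairs, with toy advisory ids) is a simplifying assumption.

```python
def count_vulnerabilities(installed, vuln_db):
    """Return the advisory ids affecting an image, given a mapping of
    installed package -> version string and a toy database mapping
    package -> [(first_fixed_version, advisory_id), ...]."""
    def parse(version):
        # Compare dotted versions as integer tuples, e.g. "1.1.0" -> (1, 1, 0)
        return tuple(int(part) for part in version.split("."))
    findings = []
    for pkg, version in installed.items():
        for fixed_in, advisory in vuln_db.get(pkg, []):
            if parse(version) < parse(fixed_in):
                findings.append(advisory)
    return findings
```

Updating a package to (or past) its first fixed version makes the corresponding finding disappear, which is the mechanism behind the roughly two-thirds reduction observed after software updates.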


Assuntos
Análise de Dados , Software , Reprodutibilidade dos Testes
9.
Front Psychiatry; 12: 746477, 2021.
Article in English | MEDLINE | ID: mdl-34975566

ABSTRACT

The value of understanding patients' illness experience and social contexts for advancing medicine and clinical care is widely acknowledged. However, methodologies for rigorous and inclusive data gathering and integrative analysis of biomedical, cultural, and social factors are limited. In this paper, we propose a digital strategy for large-scale qualitative health research, using play (as a state of being, a communication mode or context, and a set of imaginative, expressive, and game-like activities) as a research method for recursive learning and action planning. Our proposal builds on Gregory Bateson's cybernetic approach to knowledge production. Using chronic pain as an example, we show how pragmatic, structural and cultural constraints that define the relationship of patients to the healthcare system can give rise to conflicted messaging that impedes inclusive health research. We then review existing literature to illustrate how different types of play including games, chatbots, virtual worlds, and creative art making can contribute to research in chronic pain. Inspired by Frederick Steier's application of Bateson's theory to designing a science museum, we propose DiSPORA (Digital Strategy for Play-Oriented Research and Action), a virtual citizen science laboratory which provides a framework for delivering health information, tools for play-based experimentation, and data collection capacity, but is flexible in allowing participants to choose the mode and the extent of their interaction. Combined with other data management platforms used in epidemiological studies of neuropsychiatric illness, DiSPORA offers a tool for large-scale qualitative research, digital phenotyping, and advancing personalized medicine.

10.
Gigascience; 9(12), 2020 12 02.
Article in English | MEDLINE | ID: mdl-33269388

ABSTRACT

BACKGROUND: Data analysis pipelines are known to be affected by computational conditions, presumably owing to the creation and propagation of numerical errors. While this process could play a major role in the current reproducibility crisis, the precise causes of such instabilities and the path along which they propagate in pipelines are unclear. METHOD: We present Spot, a tool to identify which processes in a pipeline create numerical differences when executed in different computational conditions. Spot leverages system-call interception through ReproZip to reconstruct and compare provenance graphs without pipeline instrumentation. RESULTS: By applying Spot to the structural pre-processing pipelines of the Human Connectome Project, we found that linear and non-linear registration are the cause of most numerical instabilities in these pipelines, which confirms previous findings.
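The core comparison Spot performs can be sketched as a checksum diff over the files written by two executions of the same pipeline; files whose digests differ localize the processes that introduced numerical differences. The in-memory mapping of file names to contents used here is an illustrative stand-in for the provenance graphs that ReproZip captures via system-call interception.

```python
import hashlib

def diverging_files(run_a, run_b):
    """Given two mappings file_name -> bytes content, captured from the
    same pipeline executed under two computational conditions, return
    the common files whose checksums differ."""
    def digest(data):
        return hashlib.sha256(data).hexdigest()
    common = set(run_a) & set(run_b)
    return sorted(f for f in common if digest(run_a[f]) != digest(run_b[f]))
```

Tracing each diverging file back to the process that wrote it is what pinpoints, e.g., linear and non-linear registration as the main sources of instability.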


Assuntos
Conectoma , Análise de Dados , Humanos , Reprodutibilidade dos Testes
11.
Sensors (Basel); 20(22), 2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33202905

ABSTRACT

This paper evaluates data stream classifiers from the perspective of connected devices, focusing on the use case of Human Activity Recognition. We measure both the classification performance and the resource consumption (runtime, memory, and power) of five common stream classification algorithms, implemented in a consistent library and applied to two real human activity datasets and three synthetic datasets. Regarding classification performance, the results show the overall superiority of the Hoeffding Tree, the Mondrian forest, and the Naïve Bayes classifiers over the Feedforward Neural Network and the Micro Cluster Nearest Neighbor classifiers on four datasets out of six, including the real ones. In addition, the Hoeffding Tree and, to some extent, the Micro Cluster Nearest Neighbor are the only classifiers that can recover from a concept drift. Overall, the three leading classifiers still perform substantially worse than an offline classifier on the real datasets. Regarding resource consumption, the Hoeffding Tree and the Mondrian forest are the most memory-intensive and have the longest runtime; however, no difference in power consumption is found between classifiers. We conclude that stream learning for Human Activity Recognition on connected objects is challenged by two factors that could lead to interesting future work: high memory consumption and overall low F1 scores.
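Stream classifiers of this kind are usually evaluated prequentially: each incoming sample is first used for prediction, then for training. A minimal sketch with a toy majority-class learner standing in for the Hoeffding Tree or Mondrian forest; the class and function names are illustrative.

```python
class RunningMajority:
    """Trivial stream classifier: always predicts the most frequent
    label seen so far (a stand-in for a real incremental learner)."""
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        if not self.counts:
            return None
        return max(self.counts, key=self.counts.get)
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

def prequential_accuracy(classifier, stream):
    """Test-then-train evaluation over (features, label) pairs: predict
    on each sample before learning from it, and report overall accuracy."""
    correct = total = 0
    for x, y in stream:
        if classifier.predict(x) == y:
            correct += 1
        classifier.learn(x, y)
        total += 1
    return correct / total if total else 0.0
```

Because every sample is scored before it is learned, prequential accuracy also reveals recovery (or not) after a concept drift.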


Assuntos
Algoritmos , Atividades Humanas , Redes Neurais de Computação , Teorema de Bayes , Humanos
12.
Front Neuroinform; 14: 33, 2020.
Article in English | MEDLINE | ID: mdl-32848689

ABSTRACT

The Tomographic Quantitative Electroencephalography (qEEGt) toolbox is integrated with the Montreal Neurological Institute (MNI) Neuroinformatics Ecosystem as a Docker container in the Canadian Brain Imaging Research Platform (CBRAIN). qEEGt produces age-corrected normative Statistical Parametric Maps of EEG log source spectra, testing compliance with a normative database. This toolbox was developed at the Cuban Neuroscience Center as part of the first wave of the Cuban Human Brain Mapping Project (CHBMP) and has been validated and used in different health systems for several decades. Incorporation into the MNI ecosystem now provides CBRAIN registered users access to its full functionality and is accompanied by a public release of the source code on the GitHub and Zenodo repositories. Among other features are the calculation of EEG scalp spectra and the estimation of their source spectra using Variable Resolution Electrical Tomography (VARETA) source imaging. Crucially, this is completed by the evaluation of z spectra by means of the built-in age regression equations obtained from the CHBMP database (ages 5-87) to provide normative Statistical Parametric Mapping of EEG log source spectra. Different scalp and source visualization tools are also provided for the evaluation of individual subjects prior to further post-processing. Openly releasing this software on the CBRAIN platform will facilitate the use of standardized qEEGt methods in different research and clinical settings. An updated précis of the methods is provided in Appendix I as a reference for the toolbox. qEEGt/CBRAIN is the first installment of instruments developed by the neuroinformatics platform of the Cuba-Canada-China (CCC) project.

13.
Int J High Perform Comput Appl; 34(5): 491-501, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32831546

ABSTRACT

With increasing awareness of a troubling lack of reproducibility in analytical software tools, the degree of validity of scientific derivatives and their downstream results has become unclear. The nature of reproducibility issues may vary across domains, tools, datasets, and computational infrastructures, but numerical instabilities are thought to be a core contributor. In neuroimaging, unexpected deviations have been observed when varying operating systems or software implementations, or when adding negligible quantities of noise. In the field of numerical analysis, these issues have recently been explored through Monte Carlo Arithmetic, a method involving the instrumentation of floating-point operations with probabilistic noise injections at a target precision. Exploring multiple simulations in this context allows the characterization of the result space for a given tool or operation. In this article, we compare various perturbation models for introducing instabilities within a typical neuroimaging pipeline, including (i) targeted noise, (ii) Monte Carlo Arithmetic, and (iii) operating system variation, to identify the significance and quality of their impact on the resulting derivatives. We demonstrate that even low-order models in neuroimaging, such as the structural connectome estimation pipeline evaluated here, are sensitive to numerical instabilities, suggesting that stability is a relevant axis upon which tools should be compared, alongside more traditional criteria such as biological feasibility, computational efficiency, or, when possible, accuracy. Heterogeneity was observed across participants, which clearly illustrates a strong interaction between the tool and the dataset being processed, requiring that the stability of a given tool be evaluated with respect to a given cohort. We identify use cases for each perturbation method tested, including quality assurance, pipeline error detection, and local sensitivity analysis, and make recommendations for the evaluation of stability in a practical and analytically focused setting. Identifying how these relationships and recommendations scale to higher-order computational tools and distinct datasets, and their implications for biological feasibility, remain exciting avenues for future work.

14.
Sensors (Basel); 19(22), 2019 Nov 18.
Article in English | MEDLINE | ID: mdl-31752158

ABSTRACT

The sliding window technique is widely used to segment inertial sensor signals, i.e., accelerometer and gyroscope signals, for activity recognition. In this technique, the sensor signals are partitioned into fixed-size time windows, which can be of two types: (1) non-overlapping windows, in which time windows do not intersect, and (2) overlapping windows, in which they do. There is a widespread belief that overlapping sliding windows have a positive impact on the performance of recognition systems in Human Activity Recognition. In this paper, we analyze the impact of overlapping sliding windows on the performance of Human Activity Recognition systems under different evaluation techniques, namely subject-dependent cross-validation and subject-independent cross-validation. Our results show that the performance improvements attributed to overlapping windowing in the literature seem to be associated with the underlying limitations of subject-dependent cross-validation. Furthermore, we do not observe any performance gain from this technique in conjunction with subject-independent cross-validation. We conclude that when using subject-independent cross-validation, non-overlapping sliding windows reach the same performance as overlapping sliding windows. This result has significant implications for the resources used to train Human Activity Recognition systems.
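The two windowing schemes can be written as one function in which the overlap fraction controls the step between consecutive windows. A small sketch; the 50% overlap in the second example is a commonly used setting in the Human Activity Recognition literature.

```python
def sliding_windows(signal, size, overlap=0.0):
    """Segment a 1-D signal into fixed-size windows.
    overlap=0.0 yields non-overlapping windows; overlap=0.5 makes each
    window share half of its samples with the previous one."""
    step = max(1, int(size * (1.0 - overlap)))
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]
```

Overlapping windows produce more training segments from the same recording, which is exactly why they inflate scores under subject-dependent cross-validation: near-duplicate segments end up on both sides of the split.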


Subjects
Accelerometry/instrumentation; Algorithms; Human Activities; Pattern Recognition, Automated; Adolescent; Adult; Databases as Topic; Female; Humans; Male; Middle Aged; Neural Networks, Computer; Young Adult
15.
Front Neuroinform; 13: 12, 2019.
Article in English | MEDLINE | ID: mdl-30890927

ABSTRACT

Neuroscience has been carried into the domain of big data and high-performance computing (HPC) on the back of initiatives in data collection and increasingly compute-intensive tools. While managing HPC experiments requires considerable technical acumen, platforms and standards have been developed to ease this burden on scientists. While web portals make resources widely accessible, data organization standards such as the Brain Imaging Data Structure and tool description languages such as Boutiques give researchers a foothold to tackle these problems using their own datasets, pipelines, and environments. While these standards lower the barrier to adoption of HPC and cloud systems for neuroscience applications, they still require the consolidation of disparate domain-specific knowledge. We present Clowdr, a lightweight tool to launch experiments on HPC systems and clouds, record rich execution records, and enable the accessible sharing and re-launch of experimental summaries and results. Clowdr uniquely sits between web platforms and bare-metal applications for experiment management by preserving the flexibility of do-it-yourself solutions while providing a low barrier to developing, deploying, and disseminating neuroscientific analyses.

16.
Sci Rep; 8(1): 13650, 2018 09 12.
Article in English | MEDLINE | ID: mdl-30209345

ABSTRACT

We present a study of multiple sclerosis segmentation algorithms conducted at the international MICCAI 2016 challenge. This challenge was operated using a new open-science computing infrastructure, which allowed for the automatic and independent evaluation of a large range of algorithms in a fair and completely automatic manner. This computing infrastructure was used to evaluate thirteen MS lesion segmentation methods, exploring a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases coming from four centers following a common definition of the acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. The results of the challenge highlighted that automatic algorithms, including recent machine learning methods (random forests, deep learning, ...), still trail human expertise on both detection and delineation criteria. In addition, we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation), although it still trails on detection scores.


Assuntos
Algoritmos , Imageamento por Ressonância Magnética/métodos , Esclerose Múltipla/diagnóstico por imagem , Esclerose Múltipla/diagnóstico , Tecido Parenquimatoso/diagnóstico por imagem , Feminino , Humanos , Interpretação de Imagem Assistida por Computador/métodos , Processamento de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Masculino , Esclerose Múltipla/patologia , Redes Neurais de Computação , Tecido Parenquimatoso/patologia , Estudos Retrospectivos
17.
Gigascience; 7(5), 2018 05 01.
Article in English | MEDLINE | ID: mdl-29718199

ABSTRACT

We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications in the neuroinformatics domain. We expect Boutiques to improve the quality of application integration in computational platforms, to reduce redundancy of effort, to contribute to computational reproducibility, and to foster Open Science.


Assuntos
Biologia Computacional/métodos , Software , Encéfalo/diagnóstico por imagem , Humanos , Neuroimagem , Reprodutibilidade dos Testes
18.
Med Image Anal; 44: 177-195, 2018 02.
Article in English | MEDLINE | ID: mdl-29268169

ABSTRACT

INTRODUCTION: Automatic functional volume segmentation in PET images is a challenge that has been addressed using a large array of methods. A major limitation for the field has been the lack of a benchmark dataset that would allow direct comparison of the results in the various publications. In the present work, we describe a comparison of recent methods on a large dataset following recommendations by the American Association of Physicists in Medicine (AAPM) task group (TG) 211, which was carried out within a MICCAI (Medical Image Computing and Computer Assisted Intervention) challenge. MATERIALS AND METHODS: Organization and funding were provided by France Life Imaging (FLI). A dataset of 176 images combining simulated, phantom, and clinical images was assembled. A website allowed the participants to register and download training data (n = 19). Challengers then submitted encapsulated pipelines on an online platform that autonomously ran the algorithms on the testing data (n = 157) and evaluated the results. The methods were ranked according to the arithmetic mean of sensitivity and positive predictive value. RESULTS: Sixteen teams registered but only four provided manuscripts and pipeline(s), for a total of 10 methods. In addition, results using two thresholds and the Fuzzy Locally Adaptive Bayesian (FLAB) method were generated. All competing methods except one performed with median accuracy above 0.8. The method with the highest score was the convolutional neural network-based segmentation, which significantly outperformed 9 of the 12 other methods, but not the improved K-Means, Gaussian Model Mixture, and Fuzzy C-Means methods. CONCLUSION: The most rigorous comparative study of PET segmentation algorithms to date was carried out using the largest dataset used in such studies so far. The hierarchy among the methods in terms of accuracy did not depend strongly on the subset of datasets or on the metrics (or combination of metrics). All the methods submitted by the challengers except one demonstrated good performance, with median accuracy scores above 0.8.
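The ranking criterion described in the methods, the arithmetic mean of sensitivity and positive predictive value, can be computed directly from true-positive, false-positive, and false-negative counts. The zero-denominator guards are an implementation assumption, not part of the challenge definition.

```python
def challenge_score(tp, fp, fn):
    """Arithmetic mean of sensitivity (tp / (tp + fn)) and positive
    predictive value (tp / (tp + fp)), the ranking criterion used to
    compare segmentation methods."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return (sensitivity + ppv) / 2.0
```

Unlike accuracy, this score ignores true negatives, which dominate voxel counts when the segmented volume is small relative to the image.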


Assuntos
Algoritmos , Processamento de Imagem Assistida por Computador/métodos , Neoplasias/diagnóstico por imagem , Tomografia por Emissão de Pósitrons/métodos , Teorema de Bayes , Lógica Fuzzy , Humanos , Aprendizado de Máquina , Redes Neurais de Computação , Imagens de Fantasmas , Valor Preditivo dos Testes , Sensibilidade e Especificidade
19.
Nat Neurosci; 20(3): 299-303, 2017 Feb 23.
Article in English | MEDLINE | ID: mdl-28230846

ABSTRACT

Given concerns about the reproducibility of scientific findings, neuroimaging must define best practices for data analysis, results reporting, and algorithm and data sharing to promote transparency, reliability and collaboration. We describe insights from developing a set of recommendations on behalf of the Organization for Human Brain Mapping and identify barriers that impede these practices, including how the discipline must change to fully exploit the potential of the world's neuroimaging data.


Assuntos
Mapeamento Encefálico , Imageamento por Ressonância Magnética , Neuroimagem/métodos , Bases de Dados Factuais , Humanos , Disseminação de Informação/métodos , Imageamento por Ressonância Magnética/métodos , Reprodutibilidade dos Testes
20.
Sci Data; 3: 160102, 2016 12 06.
Article in English | MEDLINE | ID: mdl-27922621

ABSTRACT

Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses, including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent, regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html.


Subjects
Brain Mapping/statistics & numerical data; Brain/physiology; Information Dissemination/methods; Magnetic Resonance Imaging/statistics & numerical data; Data Interpretation, Statistical; Humans; Information Storage and Retrieval; Linear Models; Meta-Analysis as Topic; Reproducibility of Results