ABSTRACT
Neuroimaging research requires purpose-built analysis software, which is challenging to install and may produce different results across computing environments. The community-oriented, open-source Neurodesk platform (https://www.neurodesk.org/) harnesses a comprehensive and growing suite of neuroimaging software containers. Neurodesk includes a browser-accessible virtual desktop, command-line interface and computational notebook compatibility, allowing for accessible, flexible, portable and fully reproducible neuroimaging analysis on personal workstations, high-performance computers and the cloud.
Subjects
Neuroimaging, Software, Neuroimaging/methods, Humans, User-Computer Interface, Reproducibility of Results, Brain/diagnostic imaging
ABSTRACT
Neuroimaging data analysis often requires purpose-built software, which can be challenging to install and may produce different results across computing environments. Beyond being a roadblock to neuroscientists, these issues of accessibility and portability can hamper the reproducibility of neuroimaging data analysis pipelines. Here, we introduce the Neurodesk platform, which harnesses software containers to support a comprehensive and growing suite of neuroimaging software (https://www.neurodesk.org/). Neurodesk includes a browser-accessible virtual desktop environment and a command line interface, mediating access to containerized neuroimaging software libraries on various computing platforms, including personal and high-performance computers, cloud computing and Jupyter Notebooks. This community-oriented, open-source platform enables a paradigm shift for neuroimaging data analysis, allowing for accessible, flexible, fully reproducible, and portable data analysis pipelines.
ABSTRACT
As the global health crisis unfolded, many academic conferences moved online in 2020. This move has been hailed as a positive step towards inclusivity in its attenuation of economic, physical, and legal barriers, effectively enabling many individuals from groups that have traditionally been underrepresented to join and participate. A number of studies have outlined how moving online made it possible to gather a more global community and has increased opportunities for individuals with various constraints, e.g., caregiving responsibilities. Yet, the mere existence of online conferences is no guarantee that everyone can attend and participate meaningfully. In fact, many elements of an online conference are still significant barriers to truly diverse participation: the tools used can be inaccessible for some individuals; the scheduling choices can favour some geographical locations; the set-up of the conference can provide more visibility to well-established researchers and reduce opportunities for early-career researchers. While acknowledging the benefits of an online setting, especially for individuals who have traditionally been underrepresented or excluded, we recognize that fostering social justice requires inclusivity to be actively centered in every aspect of online conference design. Here, we draw from the literature and from our own experiences to identify practices that purposefully encourage a diverse community to attend, participate in, and lead online conferences. Reflecting on how to design more inclusive online events is especially important as multiple scientific organizations have announced that they will continue offering an online version of their event when in-person conferences can resume.
ABSTRACT
Simultaneous [18F]-fluorodeoxyglucose positron emission tomography and functional magnetic resonance imaging (FDG-PET/fMRI) provides the capability to image two sources of energetic dynamics in the brain: cerebral glucose uptake and the cerebrovascular haemodynamic response. Resting-state fMRI connectivity has been enormously useful for characterising interactions between distributed brain regions in humans. Metabolic connectivity has recently emerged as a complementary measure to investigate brain network dynamics. Functional PET (fPET) is a new approach for measuring FDG uptake with high temporal resolution and has recently shown promise for assessing the dynamics of neural metabolism. Simultaneous fMRI/fPET is a relatively new hybrid imaging modality, with only a few biomedical imaging research facilities able to acquire FDG PET and BOLD fMRI data simultaneously. We present data for n = 27 healthy young adults (18-20 yrs) who underwent a 95-min simultaneous fMRI/fPET scan while resting with their eyes open. This dataset offers significant re-use value for understanding the neural dynamics of glucose metabolism and the haemodynamic response, the synchrony and interaction between these measures, and for developing new single- and multi-modality image preparation and analysis procedures.
Subjects
Brain/diagnostic imaging, Magnetic Resonance Imaging, Positron-Emission Tomography, Brain Mapping, Fluorodeoxyglucose F18, Humans, Multimodal Imaging, Rest
ABSTRACT
Mastering the "arcana of neuroimaging analysis", the obscure knowledge required to apply an appropriate combination of software tools and parameters to analyse a given neuroimaging dataset, is a time-consuming process. Therefore, it is not typically feasible to invest the additional effort required to generalise workflow implementations to accommodate the various acquisition parameters, data storage conventions and computing environments in use at different research sites, limiting the reusability of published workflows. We present a novel software framework, Abstraction of Repository-Centric ANAlysis (Arcana), which enables the development of complex, "end-to-end" workflows that are adaptable to new analyses and portable to a wide range of computing infrastructures. Analysis templates for specific image types (e.g. MRI contrast) are implemented as Python classes, which define a range of potential derivatives and analysis methods. Arcana retrieves data from imaging repositories, which can be BIDS datasets, XNAT instances or plain directories, and stores selected derivatives and associated provenance back into a repository for reuse by subsequent analyses. Workflows are constructed using Nipype and can be executed on local workstations or in high performance computing environments. Generic analysis methods can be consolidated within common base classes to facilitate code-reuse and collaborative development, and can be specialised for study-specific requirements via class inheritance. Arcana provides a framework in which to develop unified neuroimaging workflows that can be reused across a wide range of research studies and sites.
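The base-class/inheritance pattern described above can be sketched as follows. This is an illustrative toy only: the class and method names (`MriAnalysis`, `T1wAnalysis`, `brain_mask`) are hypothetical and do not correspond to the actual Arcana API; the point is how a generic analysis template can be specialised for study-specific requirements by overriding a method.

```python
# Hypothetical sketch of template specialisation via class inheritance.
# Names and thresholds are illustrative, not part of the real Arcana API.

class MriAnalysis:
    """Generic analysis template for an MRI contrast.

    Defines a default derivative (a crude brain mask) that subclasses
    may override for study-specific requirements.
    """

    THRESHOLD = 0.5  # generic intensity threshold (arbitrary)

    def brain_mask(self, image):
        # Toy "mask": flag voxels above the class threshold.
        return [voxel > self.THRESHOLD for voxel in image]


class T1wAnalysis(MriAnalysis):
    """Specialisation for a hypothetical T1-weighted study.

    Inherits all generic methods; only the threshold changes.
    """

    THRESHOLD = 0.3  # contrast-specific threshold (arbitrary)


image = [0.2, 0.4, 0.9]
print(MriAnalysis().brain_mask(image))  # [False, False, True]
print(T1wAnalysis().brain_mask(image))  # [False, True, True]
```

The subclass reuses the base implementation wholesale; a real analysis class would override entire pipeline-construction methods rather than a single constant, but the code-reuse mechanism is the same.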
Subjects
Brain/diagnostic imaging, Information Storage and Retrieval/statistics & numerical data, Magnetic Resonance Imaging/statistics & numerical data, Neuroimaging/statistics & numerical data, Data Analysis, Humans, Information Storage and Retrieval/methods, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Software, Workflow
ABSTRACT
BACKGROUND: Friedreich ataxia is a recessively inherited, progressive neurological disease characterized by impaired mitochondrial iron metabolism. The dentate nuclei of the cerebellum are characteristic sites of neurodegeneration in the disease, but little is known of the longitudinal progression of abnormalities in these structures. METHODS: Using in vivo magnetic resonance imaging, including quantitative susceptibility mapping, we investigated changes in iron concentration and volume in the dentate nuclei in individuals with Friedreich ataxia (n = 20) and healthy controls (n = 18) over a 2-year period. RESULTS: The longitudinal rate of change in iron concentration was significantly elevated bilaterally in participants with Friedreich ataxia relative to healthy controls. Atrophy rates did not differ significantly between groups. Change in iron concentration and atrophy both correlated with baseline disease severity or duration, indicating sensitivity of these measures to disease stage. Specifically, atrophy was maximal in individuals early in the disease course, whereas the rate of change in iron concentration increased with disease progression. CONCLUSIONS: Progressive dentate nucleus abnormalities are evident in vivo in Friedreich ataxia, and the rates of change of iron concentration and atrophy in these structures are sensitive to the disease stage. The findings are consistent with an increased rate of atrophy early in the disease, followed by iron accumulation and stable volume in later stages. This pattern suggests that iron dysregulation persists after loss of the vulnerable neurons in the dentate. The significant changes observed over a 2-year period highlight the utility of quantitative susceptibility mapping as a longitudinal biomarker and staging tool. © 2019 International Parkinson and Movement Disorder Society.
Subjects
Cerebellar Nuclei/metabolism, Friedreich Ataxia/metabolism, Iron/metabolism, Adult, Atrophy/diagnostic imaging, Atrophy/metabolism, Atrophy/pathology, Cerebellar Nuclei/diagnostic imaging, Cerebellar Nuclei/pathology, Disease Progression, Female, Friedreich Ataxia/diagnostic imaging, Friedreich Ataxia/pathology, Humans, Longitudinal Studies, Magnetic Resonance Imaging, Male, Middle Aged, Young Adult
ABSTRACT
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Regardless of model complexity, all modeling methods crucially depend on an accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches have been taken to address these issues. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
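The code-generation idea discussed above can be made concrete with a deliberately tiny sketch, not taken from any of the surveyed pipelines: a high-level model description (here, the right-hand side of an ODE as a string) is translated into executable Python source for a forward-Euler update, then compiled and run. The function name `generate_euler_step` and the leaky integrate-and-fire example `dv/dt = -v + I` are illustrative assumptions.

```python
# Minimal sketch of code generation: translate a high-level model
# description into executable update code. Not any specific pipeline.

def generate_euler_step(expr: str, dt: float) -> str:
    """Emit Python source for one forward-Euler step of dv/dt = expr."""
    return (
        "def step(v, I):\n"
        f"    dvdt = {expr}\n"       # the model equation, inlined verbatim
        f"    return v + {dt} * dvdt\n"
    )

# Subthreshold leaky integrate-and-fire dynamics: dv/dt = -v + I
source = generate_euler_step("-v + I", dt=0.1)

namespace = {}
exec(source, namespace)       # compile the generated source into a function
step = namespace["step"]

# One Euler step from v=0 with input I=1: 0 + 0.1 * (-0 + 1) = 0.1
print(step(0.0, 1.0))
```

Real pipelines of course do far more (symbolic analysis, solver selection, emitting C/C++ or CUDA), but the core transformation, description in, compiled update function out, is the same.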
ABSTRACT
Diffusion MRI tractography algorithm development is increasingly moving towards global techniques to incorporate "downstream" information and conditional probabilities between neighbouring tracts. Such approaches also enable white matter to be represented more tangibly than the abstract lines generated by the most common approaches to fibre tracking. However, previously proposed algorithms still use fibre-like models of white matter corresponding to thin strands of white matter tracts rather than the tracts themselves, and therefore require many components for accurate representations, which leads to poorly constrained inverse problems. We propose a novel tract-based model of white matter, the 'Fourier tract', which is able to represent rich tract shapes with a relatively low number of parameters, and explicitly decouples the spatial extent of the modelled tract from its 'Apparent Connection Strength' (ACS). The Fourier tract model is placed within a novel Bayesian framework, which relates the tract parameters directly to the observed signal, enabling a wide range of acquisition schemes to be used. The posterior distribution of the Bayesian framework is characterised via Markov chain Monte Carlo sampling to infer probable values of the ACS and spatial extent of the imaged white matter tracts, providing measures that can be directly applied to many research and clinical studies. The robustness of the proposed tractography algorithm is demonstrated on simulated basic tract configurations, such as curving, twisting, crossing and kissing tracts, and sections of more complex numerical phantoms. As an illustration of the approach in vivo, fibre tracking is performed on a central section of the brain in three subjects from 60-direction HARDI datasets.
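To make the "rich shapes from few parameters" idea tangible, the sketch below evaluates a curve whose coordinates are truncated Fourier series in the along-tract parameter t. This is only an illustration of the general parameterisation; the coefficient values, the helper name `fourier_curve`, and the coefficient layout are assumptions, not the paper's actual model or its Bayesian inference.

```python
import math

# Illustrative 'Fourier tract' centreline: each spatial coordinate is a
# truncated Fourier series x(t) = a0 + sum_k [a_k cos(2*pi*k*t) + b_k sin(2*pi*k*t)].
# Coefficients here are arbitrary toy values, not from the actual model.

def fourier_curve(t, coeffs):
    """Evaluate one coordinate of the curve at parameter t in [0, 1]."""
    a0, harmonics = coeffs
    x = a0
    for k, (a_k, b_k) in enumerate(harmonics, start=1):
        x += a_k * math.cos(2 * math.pi * k * t)
        x += b_k * math.sin(2 * math.pi * k * t)
    return x

# A gently curving tract in 3-D: one harmonic each for x and y, constant z.
coeffs = {
    "x": (0.0, [(1.0, 0.0)]),   # a0=0, first cosine harmonic
    "y": (0.0, [(0.0, 1.0)]),   # a0=0, first sine harmonic
    "z": (0.5, []),             # constant offset only
}

point = tuple(fourier_curve(0.25, coeffs[axis]) for axis in "xyz")
print(point)  # approximately (0, 1, 0.5)
```

With only a handful of coefficients per axis, smooth curving, twisting shapes are representable, which is what keeps the parameter count low relative to strand-based models.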
Subjects
Diffusion Magnetic Resonance Imaging/methods, Fourier Analysis, Image Processing, Computer-Assisted/methods, Nerve Fibers, Myelinated, White Matter/anatomy & histology, Humans, Models, Statistical, Neural Pathways/anatomy & histology
ABSTRACT
The assessment of Diffusion-Weighted MRI (DW-MRI) fibre-tracking algorithms has been limited by the lack of an appropriate 'gold standard'. Practical limitations of alternative methods and physical models have meant that numerical simulations have become the method of choice in practice. However, previous numerical phantoms have consisted of separate fibres embedded in homogeneous backgrounds, which do not capture the true nature of white matter. In this paper we describe a method that is able to randomly generate numerical structures consisting of densely packed bundles of fibres, which are much more representative of human white matter, and simulate the DW-MR images that would arise from them under many imaging conditions. User-defined parameters may be adjusted to produce structures with a range of complexities that spans the levels we would expect to find in vivo. These structures are shown to contain many different features that occur in human white matter and which could confound fibre-tracking algorithms, such as tract kissing and crossing. Furthermore, combinations of such features can be sampled by the random generation of many different structures with consistent levels of complexity. The proposed software provides a means for quantitative assessment via direct comparison between tracking results and the exact location of the generated fibres. This should greatly improve our understanding of algorithm performance and therefore prove an important tool for fibre-tracking development.
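The core of the phantom idea, randomly generating fibre trajectories whose exact positions are known so that tracking output can be scored against ground truth, can be sketched in a few lines. Everything below is a toy under stated assumptions: the function name `generate_bundle`, the common-backbone-plus-jitter geometry, and all parameter values are illustrative, not those of the actual software, and no DW-MR signal simulation is attempted.

```python
import math
import random

# Toy numerical-phantom sketch: a densely packed bundle of fibre
# trajectories with exactly known positions (the 'gold standard').
# Geometry and parameters are illustrative assumptions only.

def generate_bundle(n_fibres, n_points, curvature=0.3, jitter=0.05, seed=0):
    """Return n_fibres polylines of n_points 3-D points each.

    Each fibre follows a shared curved backbone plus a small random
    per-fibre offset, mimicking a coherent but densely packed bundle.
    """
    rng = random.Random(seed)  # seeded for reproducible ground truth
    bundle = []
    for _ in range(n_fibres):
        dy = rng.uniform(-jitter, jitter)
        dz = rng.uniform(-jitter, jitter)
        fibre = []
        for i in range(n_points):
            t = i / (n_points - 1)            # arc parameter in [0, 1]
            x = t                             # progress along the bundle
            y = curvature * math.sin(math.pi * t) + dy  # curved backbone
            z = dz
            fibre.append((x, y, z))
        bundle.append(fibre)
    return bundle

bundle = generate_bundle(n_fibres=50, n_points=20)
print(len(bundle), len(bundle[0]))  # 50 fibres, 20 points each
```

Because every generated point is known exactly, a tracking result can be scored by direct distance to these polylines; a realistic phantom would additionally simulate the diffusion-weighted signal arising from the packed geometry, which is where the described method does its real work.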