Results 1 - 20 of 38
1.
Neuroimage ; 244: 118589, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34563682

ABSTRACT

MRI plays a crucial role in multiple sclerosis diagnosis and patient follow-up. In particular, the delineation of T2-FLAIR hyperintense lesions is essential, although it is mostly performed manually, a tedious task. Many methods have thus been proposed to automate this task. However, sufficiently large datasets with thorough expert manual segmentations are still lacking to evaluate these methods. We present a unique dataset for MS lesion segmentation evaluation. It consists of 53 patients acquired on 4 different scanners with a harmonized protocol. Hyperintense lesions on FLAIR were manually delineated for each patient by 7 experts, with control on the T2 sequence, and gathered into a consensus segmentation for evaluation. We provide raw and preprocessed data and a split of the dataset into training and testing data, the latter including data from a scanner not present in the training dataset. We strongly believe that this dataset will become a reference in MS lesion segmentation evaluation, enabling the evaluation of many aspects: performance on an unseen scanner, comparison to individual experts' performance, comparison to other challengers who have already used this dataset, etc.


Subjects
Magnetic Resonance Imaging/methods , Multiple Sclerosis/diagnostic imaging , Adult , Datasets as Topic , Female , Humans , Male , Middle Aged , Young Adult
2.
Sensors (Basel) ; 20(22)2020 Nov 13.
Article in English | MEDLINE | ID: mdl-33202905

ABSTRACT

This paper evaluates data stream classifiers from the perspective of connected devices, focusing on the use case of Human Activity Recognition. We measure both the classification performance and the resource consumption (runtime, memory, and power) of five common stream classification algorithms, implemented in a consistent library and applied to two real human activity datasets and three synthetic datasets. Regarding classification performance, the results show the overall superiority of the Hoeffding Tree, the Mondrian forest, and the Naïve Bayes classifiers over the Feedforward Neural Network and the Micro Cluster Nearest Neighbor classifiers on four datasets out of six, including the real ones. In addition, the Hoeffding Tree and, to some extent, the Micro Cluster Nearest Neighbor are the only classifiers that can recover from a concept drift. Overall, the three leading classifiers still perform substantially worse than an offline classifier on the real datasets. Regarding resource consumption, the Hoeffding Tree and the Mondrian forest are the most memory-intensive and have the longest runtime; however, no difference in power consumption is found between classifiers. We conclude that stream learning for Human Activity Recognition on connected objects is challenged by two factors that could lead to interesting future work: high memory consumption and low F1 scores overall.
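Stream classifiers such as those compared above are typically evaluated with the prequential (test-then-train) protocol: each arriving sample is first used for prediction, then for an incremental model update. The sketch below is illustrative only, not the paper's implementation: a minimal incremental Gaussian Naïve Bayes (one of the classifier families evaluated) on a hypothetical synthetic two-class stream.

```python
import math
import random
from collections import defaultdict

class IncrementalGaussianNB:
    """Gaussian Naive Bayes updated one sample at a time (Welford's algorithm)."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.stats = {}  # (label, feature index) -> [running mean, running M2]

    def learn_one(self, x, y):
        self.counts[y] += 1
        n = self.counts[y]
        for i, v in enumerate(x):
            mean, m2 = self.stats.setdefault((y, i), [0.0, 0.0])
            delta = v - mean
            mean += delta / n
            m2 += delta * (v - mean)
            self.stats[(y, i)] = [mean, m2]

    def predict_one(self, x):
        best, best_lp = None, float("-inf")
        total = sum(self.counts.values())
        for y, n in self.counts.items():
            lp = math.log(n / total)  # class prior
            for i, v in enumerate(x):
                mean, m2 = self.stats[(y, i)]
                var = max(m2 / n if n > 1 else 1.0, 1e-9)
                lp += -0.5 * (math.log(2 * math.pi * var) + (v - mean) ** 2 / var)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Prequential (test-then-train) evaluation on a synthetic two-class stream.
random.seed(0)
model = IncrementalGaussianNB()
correct = seen = 0
for _ in range(2000):
    y = random.choice([0, 1])
    x = [random.gauss(y * 2.0, 1.0), random.gauss(-y, 1.0)]
    if seen > 0:
        correct += int(model.predict_one(x) == y)  # test first...
    model.learn_one(x, y)                          # ...then train
    seen += 1
print(f"prequential accuracy: {correct / (seen - 1):.3f}")
```

On connected devices, the appeal of this protocol is that it bounds memory to the model's sufficient statistics rather than the stream itself.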


Subjects
Algorithms , Human Activities , Neural Networks, Computer , Bayes Theorem , Humans
3.
Int J High Perform Comput Appl ; 34(5): 491-501, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32831546

ABSTRACT

With increased awareness of a troubling lack of reproducibility in analytical software tools, the degree of validity of scientific derivatives and their downstream results has become unclear. The nature of reproducibility issues may vary across domains, tools, datasets, and computational infrastructures, but numerical instabilities are thought to be a core contributor. In neuroimaging, unexpected deviations have been observed when varying operating systems or software implementations, or when adding negligible quantities of noise. In numerical analysis, these issues have recently been explored through Monte Carlo Arithmetic, a method that instruments floating-point operations with probabilistic noise injections at a target precision. Running multiple simulations in this context allows the result space of a given tool or operation to be characterized. In this article, we compare various perturbation models for introducing instabilities within a typical neuroimaging pipeline, including (i) targeted noise, (ii) Monte Carlo Arithmetic, and (iii) operating system variation, to identify the significance and quality of their impact on the resulting derivatives. We demonstrate that even low-order models in neuroimaging, such as the structural connectome estimation pipeline evaluated here, are sensitive to numerical instabilities, suggesting that stability is a relevant axis along which tools should be compared, alongside more traditional criteria such as biological feasibility, computational efficiency, or, when possible, accuracy. Heterogeneity was observed across participants, which clearly illustrates a strong interaction between the tool and the dataset being processed and requires that the stability of a given tool be evaluated with respect to a given cohort. We identify use cases for each perturbation method tested, including quality assurance, pipeline error detection, and local sensitivity analysis, and make recommendations for the evaluation of stability in a practical and analytically focused setting. How these relationships and recommendations scale to higher-order computational tools and distinct datasets, and their implications for biological feasibility, remain exciting avenues for future work.
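The Monte Carlo Arithmetic idea referenced above can be approximated in a few lines: perturb each floating-point operation with uniform random noise at a virtual precision of t bits, repeat the computation, and estimate the number of significant digits from the spread of the samples. This is a hedged sketch of the principle only; real instrumentation (e.g., compiler-level tools) intercepts every operation, and all function names below are illustrative.

```python
import math
import random

def mca_noise(x, t=24):
    """Perturb x with uniform relative noise at virtual precision t bits,
    mimicking Monte Carlo Arithmetic's random rounding."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    return x + random.uniform(-0.5, 0.5) * 2.0 ** (e - t + 1)

def noisy_dot(a, b, t=24):
    """Dot product with noise injected after every multiply and add."""
    acc = 0.0
    for ai, bi in zip(a, b):
        acc = mca_noise(acc + mca_noise(ai * bi, t), t)
    return acc

def significant_digits(samples):
    """Estimate base-10 significant digits from the spread of MCA samples."""
    mu = sum(samples) / len(samples)
    sd = math.sqrt(sum((s - mu) ** 2 for s in samples) / (len(samples) - 1))
    if sd == 0.0:
        return 15.9  # all double-precision digits significant
    return -math.log10(sd / abs(mu))

random.seed(1)
a = [random.uniform(0, 1) for _ in range(1000)]
b = [random.uniform(0, 1) for _ in range(1000)]
samples = [noisy_dot(a, b, t=24) for _ in range(30)]
print(f"significant digits at t=24: {significant_digits(samples):.1f}")
```

Running the same pipeline many times under such perturbations is what characterizes the "result space" mentioned in the abstract.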

4.
Sensors (Basel) ; 19(22)2019 Nov 18.
Article in English | MEDLINE | ID: mdl-31752158

ABSTRACT

The sliding window technique is widely used to segment inertial sensor signals, i.e., from accelerometers and gyroscopes, for activity recognition. In this technique, the sensor signals are partitioned into fixed-size time windows, which can be of two types: (1) non-overlapping windows, in which consecutive time windows do not intersect, and (2) overlapping windows, in which they do. It is widely believed that overlapping sliding windows have a positive impact on the performance of Human Activity Recognition systems. In this paper, we analyze this impact under different evaluation techniques, namely subject-dependent cross validation and subject-independent cross validation. Our results show that the performance improvements attributed to overlapping windowing in the literature seem to be associated with the underlying limitations of subject-dependent cross validation. Furthermore, we do not observe any performance gain from this technique in conjunction with subject-independent cross validation. We conclude that, under subject-independent cross validation, non-overlapping sliding windows reach the same performance as overlapping ones. This result has significant implications for the resources used in training human activity recognition systems.
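The two windowing schemes compared above differ only in the step between consecutive windows, so both can be expressed by one segmentation function. This is an illustrative sketch, not the paper's code; the function name and overlap convention are assumptions.

```python
def sliding_windows(signal, size, overlap=0.0):
    """Segment a 1-D signal into fixed-size windows.
    overlap=0.0 gives non-overlapping windows; overlap=0.5 gives 50% overlap."""
    step = max(1, int(size * (1.0 - overlap)))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

signal = list(range(10))
print(sliding_windows(signal, 4))               # two disjoint windows
print(sliding_windows(signal, 4, overlap=0.5))  # step of 2, windows intersect
```

With 50% overlap the number of training windows roughly doubles, which is exactly the resource cost the conclusion refers to.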


Subjects
Accelerometry/instrumentation , Algorithms , Human Activities , Pattern Recognition, Automated , Adolescent , Adult , Databases as Topic , Female , Humans , Male , Middle Aged , Neural Networks, Computer , Young Adult
5.
Neuroimage ; 124(Pt B): 1188-1195, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26364860

ABSTRACT

Neuroimaging has been facing a data deluge characterized by the exponential growth of both raw and processed data. As a result, mining the massive quantities of digital data collected in these studies offers unprecedented opportunities and has become paramount for today's research. As the neuroimaging community enters the world of "Big Data", there has been a concerted push for enhanced sharing initiatives, whether within a multisite study, across studies, or federated and shared publicly. This article will focus on the database and processing ecosystem developed at the Montreal Neurological Institute (MNI) to support multicenter data acquisition both nationally and internationally, create database repositories, facilitate data-sharing initiatives, and leverage existing software toolkits for large-scale data processing.


Subjects
Databases, Factual , Information Dissemination , Neuroimaging , Behavior , Genomics , Humans , Longitudinal Studies , Quality Control , Software
6.
J Biomed Inform ; 52: 279-92, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25038553

ABSTRACT

This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological, and pathophysiological content of the object models explicit. In such an interdisciplinary context, we chose to rely on a common integration framework provided by a foundational ontology, which facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e., FMA, PATO, MPATH, RadLex, and ChEBI. Emphasis is placed on the methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially its organization in model layers, as well as its use to browse and query the model repository.


Subjects
Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Internet , Semantics , Controlled Vocabulary , Brain/pathology , Computer Simulation , Humans , Models, Theoretical , Software
7.
PLoS One ; 19(1): e0296725, 2024.
Article in English | MEDLINE | ID: mdl-38285635

ABSTRACT

Convolutional neural networks (CNNs) are currently among the most widely used deep neural network (DNN) architectures and achieve state-of-the-art performance on many problems. Originally applied to computer vision tasks, CNNs work well with any data that have a spatial relationship, besides images, and have been applied to different fields. However, recent work has highlighted numerical stability challenges in DNNs, which also relate to their known sensitivity to noise injection. These challenges can jeopardise their performance and reliability. This paper investigates DeepGOPlus, a CNN that predicts protein function. DeepGOPlus has achieved state-of-the-art performance and can successfully take advantage of, and annotate, the abundant protein sequences emerging in proteomics. We determine the numerical stability of the model's inference stage by quantifying the numerical uncertainty resulting from perturbations of the underlying floating-point data. In addition, we explore the opportunity to use reduced-precision floating-point formats for DeepGOPlus inference to reduce memory consumption and latency. This is achieved by instrumenting DeepGOPlus' execution using Monte Carlo Arithmetic, a technique that experimentally quantifies floating-point operation errors, and VPREC, a tool that emulates results with customizable floating-point precision formats. Focus is placed on the inference stage as it is the primary deliverable of the DeepGOPlus model and is widely applicable across different environments. All in all, our results show that although the DeepGOPlus CNN is very stable numerically, it can only be selectively implemented with lower-precision floating-point formats. We conclude that predictions obtained from the pre-trained DeepGOPlus model are numerically very reliable and use existing floating-point formats efficiently.
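The core of reduced-precision emulation of the kind VPREC performs can be approximated by rounding the mantissa of a double to a chosen number of bits. This sketch is illustrative only, not the VPREC mechanism itself (it ignores exponent-range clamping and rounding-mode options); the function name is an assumption.

```python
import math

def round_to_precision(x, mantissa_bits):
    """Round a double to the given number of explicit mantissa bits,
    emulating a reduced-precision floating-point format (sketch only)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)                 # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** (mantissa_bits + 1)   # one implicit bit plus mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

x = math.pi
for bits in (52, 23, 10):                # double, float32, float16 mantissas
    y = round_to_precision(x, bits)
    print(f"{bits:2d} bits: {y!r}  relative error {abs(y - x) / x:.2e}")
```

Sweeping `mantissa_bits` in this way shows at which precision a model's outputs start to diverge, which is the question the paper asks of DeepGOPlus inference.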


Subjects
Neural Networks, Computer , Proteins , Reproducibility of Results , Amino Acid Sequence , Monte Carlo Method
8.
PLoS One ; 19(1): e0295069, 2024.
Article in English | MEDLINE | ID: mdl-38295031

ABSTRACT

CONTEXT: An existing major challenge in Parkinson's disease (PD) research is the identification of biomarkers of disease progression. While magnetic resonance imaging is a potential source of PD biomarkers, none of the magnetic resonance imaging measures of PD are robust enough to warrant their adoption in clinical research. This study is part of a project that aims to replicate 11 PD studies reviewed in a recent survey (JAMA neurology, 78(10) 2021) to investigate the robustness of PD neuroimaging findings to data and analytical variations. OBJECTIVE: This study attempts to replicate the results in Hanganu et al. (Brain, 137(4) 2014) using data from the Parkinson's Progression Markers Initiative (PPMI). METHODS: Using 25 PD subjects and 18 healthy controls, we analyzed the rate of change of cortical thickness and of the volume of subcortical structures, and we measured the relationship between structural changes and cognitive decline. We compared our findings to the results in the original study. RESULTS: (1) Similarly to the original study, PD patients with mild cognitive impairment (MCI) exhibited increased cortical thinning over time compared to patients without MCI in the right middle temporal gyrus, insula, and precuneus. (2) The rate of cortical thinning in the left inferior temporal and precentral gyri in PD patients correlated with the change in cognitive performance. (3) There were no group differences in the change of subcortical volumes. (4) We did not find a relationship between the change in subcortical volumes and the change in cognitive performance. CONCLUSION: Despite important differences in the dataset used in this replication study, and despite differences in sample size, we were able to partially replicate the original results. We produced a publicly available reproducible notebook allowing researchers to further investigate the reproducibility of the results in Hanganu et al. (2014) when more data is added to PPMI.


Subjects
Cognitive Dysfunction , Parkinson Disease , Humans , Parkinson Disease/pathology , Cerebral Cortex/pathology , Cerebral Cortical Thinning/pathology , Reproducibility of Results , Brain/diagnostic imaging , Brain/pathology , Cognitive Dysfunction/pathology , Magnetic Resonance Imaging , Biomarkers
9.
PLoS One ; 19(6): e0289384, 2024.
Article in English | MEDLINE | ID: mdl-38917084

ABSTRACT

Semantic memory representations are generally well maintained in aging, whereas semantic control is thought to be more affected. To explain this phenomenon, this study tested the predictions of the Compensation-Related Utilization of Neural Circuits Hypothesis (CRUNCH), focusing on task demands in aging as a possible framework. The CRUNCH effect would manifest itself in semantic tasks through a compensatory increase in neural activation in semantic control network regions, but only up to a certain threshold of task demands. This study compares 39 younger (20-35 years old) with 39 older participants (60-75 years old) in a triad-based semantic judgment task performed in an fMRI scanner, while manipulating task demand levels (low versus high) through semantic distance. In line with the CRUNCH predictions, differences in neurofunctional activation and behavioral performance (accuracy and response times) were expected in younger versus older participants in the low- versus high-demand conditions, and should be manifested in semantic control Regions of Interest (ROIs). Our older participants had intact behavioral performance, as proposed in the literature for semantic memory tasks (maintained accuracy and slower response times (RTs)). Age-invariant behavioral performance in the older group compared to the younger one is necessary to test the CRUNCH predictions. The older adults were also characterized by high cognitive reserve, as our neuropsychological tests showed. Our behavioral results confirmed that our task successfully manipulated task demands: error rates, RTs, and perceived difficulty increased with increasing task demands in both age groups. We did not find an interaction between age group and task demand, or a statistically significant difference in performance between the low- and high-demand conditions, for either RTs or accuracy. As for brain activation, we did not find the expected age group by task demand interaction, or a significant main effect of task demand. Overall, our results are compatible with some neural activation in the semantic network and the semantic control network, largely in frontotemporoparietal regions. ROI analyses demonstrated significant effects (but no interactions) of task demand in the left and right inferior frontal gyri, the left posterior middle temporal gyrus, the posterior inferior temporal gyrus, and the prefrontal gyrus. Overall, our test did not confirm the CRUNCH predictions.


Subjects
Aging , Magnetic Resonance Imaging , Memory , Reaction Time , Semantics , Humans , Adult , Middle Aged , Aged , Male , Female , Aging/physiology , Memory/physiology , Young Adult , Reaction Time/physiology , Brain Mapping , Nerve Net/physiology , Nerve Net/diagnostic imaging , Brain/physiology , Brain/diagnostic imaging , Pre-Registration Publication
10.
Sci Data ; 10(1): 189, 2023 04 06.
Article in English | MEDLINE | ID: mdl-37024500

ABSTRACT

We present the Canadian Open Neuroscience Platform (CONP) portal, which answers the research community's need for flexible data-sharing resources and provides advanced search tools and processing infrastructure. This portal differs from previous data-sharing projects in that it integrates datasets originating from a number of already existing platforms or databases through DataLad, a file-level data integrity and access layer. The portal is also an entry point for searching and accessing a large number of standardized and containerized software tools, and it links to a computing infrastructure. It leverages community standards to help document and facilitate the reuse of both datasets and tools, and it already shows growing community adoption, giving access to more than 60 neuroscience datasets and over 70 tools. The CONP portal demonstrates the feasibility of, and offers a model for, a distributed data and tool management system spanning 17 institutions across Canada.


Subjects
Databases, Factual , Software , Canada , Information Dissemination
11.
Stud Health Technol Inform ; 175: 81-90, 2012.
Article in English | MEDLINE | ID: mdl-22941991

ABSTRACT

Production operation of large distributed computing infrastructures (DCIs) still requires a lot of human intervention to reach an acceptable quality of service. This may be achievable for scientific communities with solid IT support, but it remains a show-stopper for others. Some application execution environments are used to hide runtime technical issues from end users, but they mostly aim at fault tolerance rather than incident resolution, and their operation still requires substantial manpower. A longer-term support activity is thus needed to ensure sustained quality of service for Virtual Organisations (VOs). This paper describes how the biomed VO has addressed this challenge by setting up a technical support team. Its organisation, tooling, daily tasks, and procedures are described. Results are shown in terms of resource usage by end users, the number of reported incidents, and the software tools developed. Based on our experience, we suggest ways to measure the impact of the technical support, as well as perspectives for decreasing its human cost and making it more community-specific.


Subjects
Biological Science Disciplines , Internet/organization & administration , Maintenance/organization & administration , Medical Informatics/organization & administration , User-Computer Interface
12.
Gigascience ; 10(6)2021 06 03.
Article in English | MEDLINE | ID: mdl-34080631

ABSTRACT

BACKGROUND: Software containers greatly facilitate the deployment and reproducibility of scientific data analyses in various platforms. However, container images often contain outdated or unnecessary software packages, which increases the number of security vulnerabilities in the images, widens the attack surface in the container host, and creates substantial security risks for computing infrastructures at large. This article presents a vulnerability analysis of container images for scientific data analysis. We compare results obtained with 4 vulnerability scanners, focusing on the use case of neuroscience data analysis, and quantifying the effect of image update and minification on the number of vulnerabilities. RESULTS: We find that container images used for neuroscience data analysis contain hundreds of vulnerabilities, that software updates remove roughly two-thirds of these vulnerabilities, and that removing unused packages is also effective. CONCLUSIONS: We provide recommendations on how to build container images with fewer vulnerabilities.


Subjects
Data Analysis , Software , Reproducibility of Results
13.
PLoS One ; 16(11): e0250755, 2021.
Article in English | MEDLINE | ID: mdl-34724000

ABSTRACT

The analysis of brain-imaging data requires complex processing pipelines to support findings on brain function or pathologies. Recent work has shown that variability in analytical decisions, small amounts of noise, or differing computational environments can lead to substantial differences in results, endangering the trust in conclusions. We explored the instability of results by instrumenting a structural connectome estimation pipeline with Monte Carlo Arithmetic to introduce random noise throughout. We evaluated the reliability of the connectomes, the robustness of their features, and the eventual impact on analysis. The stability of results was found to range from perfectly stable (i.e., all digits significant) to highly unstable (i.e., 0-1 significant digits). This paper highlights the potential of leveraging induced variance in estimates of brain connectivity to reduce the bias in networks without compromising reliability, while increasing the robustness, and the potential upper bound, of their applications in the classification of individual differences. We demonstrate that stability evaluations are necessary for understanding the error inherent in brain-imaging experiments, and that numerical analysis can be applied to typical analytical workflows both in brain imaging and in other domains of computational science, as the techniques used were data- and context-agnostic and globally relevant. Overall, while the extreme variability in results due to analytical instabilities could severely hamper our understanding of brain organization, it also affords us the opportunity to increase the robustness of findings.


Subjects
Brain/physiology , Connectome , Models, Neurological , Nerve Net/physiology , Humans , Uncertainty
14.
Front Psychiatry ; 12: 746477, 2021.
Article in English | MEDLINE | ID: mdl-34975566

ABSTRACT

The value of understanding patients' illness experience and social contexts for advancing medicine and clinical care is widely acknowledged. However, methodologies for rigorous and inclusive data gathering and integrative analysis of biomedical, cultural, and social factors are limited. In this paper, we propose a digital strategy for large-scale qualitative health research, using play (as a state of being, a communication mode or context, and a set of imaginative, expressive, and game-like activities) as a research method for recursive learning and action planning. Our proposal builds on Gregory Bateson's cybernetic approach to knowledge production. Using chronic pain as an example, we show how pragmatic, structural and cultural constraints that define the relationship of patients to the healthcare system can give rise to conflicted messaging that impedes inclusive health research. We then review existing literature to illustrate how different types of play including games, chatbots, virtual worlds, and creative art making can contribute to research in chronic pain. Inspired by Frederick Steier's application of Bateson's theory to designing a science museum, we propose DiSPORA (Digital Strategy for Play-Oriented Research and Action), a virtual citizen science laboratory which provides a framework for delivering health information, tools for play-based experimentation, and data collection capacity, but is flexible in allowing participants to choose the mode and the extent of their interaction. Combined with other data management platforms used in epidemiological studies of neuropsychiatric illness, DiSPORA offers a tool for large-scale qualitative research, digital phenotyping, and advancing personalized medicine.

15.
Elife ; 10, 2021 08 25.
Article in English | MEDLINE | ID: mdl-34431476

ABSTRACT

Neuroimaging stands to benefit from emerging ultrahigh-resolution 3D histological atlases of the human brain; the first of which is 'BigBrain'. Here, we review recent methodological advances for the integration of BigBrain with multi-modal neuroimaging and introduce a toolbox, 'BigBrainWarp', that combines these developments. The aim of BigBrainWarp is to simplify workflows and support the adoption of best practices. This is accomplished with a simple wrapper function that allows users to easily map data between BigBrain and standard MRI spaces. The function automatically pulls specialised transformation procedures, based on ongoing research from a wide collaborative network of researchers. Additionally, the toolbox improves accessibility of histological information through dissemination of ready-to-use cytoarchitectural features. Finally, we demonstrate the utility of BigBrainWarp with three tutorials and discuss the potential of the toolbox to support multi-scale investigations of brain organisation.


Subjects
Brain/diagnostic imaging , Imaging, Three-Dimensional/methods , Neuroimaging/methods , Software , Aged , Atlases as Topic , Humans , Magnetic Resonance Imaging , Male
16.
Gigascience ; 10(8)2021 08 20.
Article in English | MEDLINE | ID: mdl-34414422

ABSTRACT

As the global health crisis unfolded, many academic conferences moved online in 2020. This move has been hailed as a positive step towards inclusivity in its attenuation of economic, physical, and legal barriers and effectively enabled many individuals from groups that have traditionally been underrepresented to join and participate. A number of studies have outlined how moving online made it possible to gather a more global community and has increased opportunities for individuals with various constraints, e.g., caregiving responsibilities. Yet, the mere existence of online conferences is no guarantee that everyone can attend and participate meaningfully. In fact, many elements of an online conference are still significant barriers to truly diverse participation: the tools used can be inaccessible for some individuals; the scheduling choices can favour some geographical locations; the set-up of the conference can provide more visibility to well-established researchers and reduce opportunities for early-career researchers. While acknowledging the benefits of an online setting, especially for individuals who have traditionally been underrepresented or excluded, we recognize that fostering social justice requires inclusivity to actively be centered in every aspect of online conference design. Here, we draw from the literature and from our own experiences to identify practices that purposefully encourage a diverse community to attend, participate in, and lead online conferences. Reflecting on how to design more inclusive online events is especially important as multiple scientific organizations have announced that they will continue offering an online version of their event when in-person conferences can resume.

17.
Stud Health Technol Inform ; 159: 203-14, 2010.
Article in English | MEDLINE | ID: mdl-20543439

ABSTRACT

This paper studies the optimization of Mean-Shift (MS) image filtering scale parameters. A parameter sweep experiment representing 164 days of CPU time is performed on the EGEE grid. The mathematical foundations of Mean-Shift and the grid environment used for the deployment are described in detail. The experiments and results are then discussed, highlighting the efficiency of the gradient ascent algorithm for MS parameter optimization and a number of grid observations related to data transfers, reliability, task scheduling, CPU time, and usability.
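The Mean-Shift iteration whose scale parameters are swept above can be sketched in one dimension: each point is repeatedly moved to the Gaussian-weighted mean of all points until it settles on a mode of the underlying density. This is an illustrative sketch under assumed data; the paper applies the method to image filtering with separate spatial and range scales.

```python
import math
import random

def mean_shift(points, bandwidth, iters=50):
    """Shift each point to the Gaussian-weighted mean of all points
    until it converges to a density mode (1-D sketch)."""
    modes = []
    for p in points:
        x = p
        for _ in range(iters):
            num = den = 0.0
            for q in points:
                w = math.exp(-((x - q) ** 2) / (2 * bandwidth ** 2))
                num += w * q
                den += w
            x_new = num / den
            if abs(x_new - x) < 1e-6:
                break
            x = x_new
        modes.append(x)
    return modes

random.seed(0)
pts = [random.gauss(0, 0.3) for _ in range(30)] + [random.gauss(5, 0.3) for _ in range(30)]
modes = mean_shift(pts, bandwidth=1.0)
clusters = sorted({round(m) for m in modes})
print(clusters)  # the points collapse onto two modes, near 0 and near 5
```

The bandwidth plays the role of the scale parameter being optimized: too small and spurious modes appear, too large and distinct modes merge, which is why a gradient ascent over this parameter is worthwhile.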


Subjects
Computer Communication Networks/organization & administration , Image Processing, Computer-Assisted/methods , Algorithms , Image Processing, Computer-Assisted/statistics & numerical data
18.
Gigascience ; 9(12)2020 12 02.
Article in English | MEDLINE | ID: mdl-33269388

ABSTRACT

BACKGROUND: Data analysis pipelines are known to be affected by computational conditions, presumably owing to the creation and propagation of numerical errors. While this process could play a major role in the current reproducibility crisis, the precise causes of such instabilities and the path along which they propagate in pipelines are unclear. METHOD: We present Spot, a tool to identify which processes in a pipeline create numerical differences when executed in different computational conditions. Spot leverages system-call interception through ReproZip to reconstruct and compare provenance graphs without pipeline instrumentation. RESULTS: By applying Spot to the structural pre-processing pipelines of the Human Connectome Project, we found that linear and non-linear registration are the cause of most numerical instabilities in these pipelines, which confirms previous findings.


Subjects
Connectome , Data Analysis , Humans , Reproducibility of Results
19.
Front Neuroinform ; 14: 33, 2020.
Article in English | MEDLINE | ID: mdl-32848689

ABSTRACT

The Tomographic Quantitative Electroencephalography (qEEGt) toolbox is integrated with the Montreal Neurological Institute (MNI) Neuroinformatics Ecosystem as a Docker container in the Canadian Brain Imaging Research Platform (CBRAIN). qEEGt produces age-corrected normative Statistical Parametric Maps of EEG log source spectra, testing compliance with a normative database. This toolbox was developed at the Cuban Neuroscience Center as part of the first wave of the Cuban Human Brain Mapping Project (CHBMP) and has been validated and used in different health systems for several decades. Incorporation into the MNI ecosystem now provides CBRAIN registered users access to its full functionality and is accompanied by a public release of the source code in GitHub and Zenodo repositories. Among other features are the calculation of EEG scalp spectra and the estimation of their source spectra using Variable Resolution Electrical Tomography (VARETA) source imaging. Crucially, this is completed by the evaluation of z spectra by means of built-in age regression equations obtained from the CHBMP database (ages 5-87), providing normative Statistical Parametric Mapping of EEG log source spectra. Different scalp and source visualization tools are also provided for the evaluation of individual subjects prior to further post-processing. Openly releasing this software on the CBRAIN platform will facilitate the use of standardized qEEGt methods in different research and clinical settings. An updated precis of the methods is provided in Appendix I as a reference for the toolbox. qEEGt/CBRAIN is the first installment of instruments developed by the neuroinformatic platform of the Cuba-Canada-China (CCC) project.
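The normative z-scoring step described above amounts to comparing a subject's log spectral power with the mean and standard deviation predicted for their age by regression equations. The sketch below is illustrative only: the function name and the polynomial coefficients are invented for the example, while the real norms come from the CHBMP age regressions.

```python
def normative_z(log_power, age, coeffs):
    """Z-score an EEG log-power value against a normative age-regression
    model in which the mean and sd are polynomial functions of age."""
    mean = sum(c * age ** i for i, c in enumerate(coeffs["mean"]))
    sd = sum(c * age ** i for i, c in enumerate(coeffs["sd"]))
    return (log_power - mean) / sd

# Illustrative coefficients only; real norms come from the CHBMP database.
alpha_norms = {"mean": [2.0, -0.01], "sd": [0.4, 0.0]}
z = normative_z(log_power=1.5, age=30, coeffs=alpha_norms)
print(f"z = {z:.2f}")
```

Repeating this per frequency and per source voxel yields the z spectra that the toolbox renders as Statistical Parametric Maps.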

20.
Stud Health Technol Inform ; 147: 62-71, 2009.
Article in English | MEDLINE | ID: mdl-19593045

ABSTRACT

Image analysis has been strongly present in several healthgrid initiatives from the start, and today we find many imaging projects with successful grid implementations and developments. An example is the analysis of functional MRI data on grids, which has been successfully realized by several projects and could be of interest to others. However, crossing the borders of existing grids is not trivial because the infrastructures created for these projects differ, each adopting a (slightly) different software stack. This paper describes our early attempts to cross the borders between the German and Dutch grid infrastructures for medical imaging, motivated by a genuine wish to share expertise in fMRI analysis on grids between these two communities. We describe how we used off-the-shelf, production-level grid technology to implement supporting mechanisms for cooperation in fMRI at several levels (users, data, software, workflows, and computing resources). This simple exercise provided us with valuable insights into the problems of crossing the borders of real grids from a user's perspective. Besides technical aspects, we observed that security and usability are very important for the successful interoperation of healthgrids.


Subjects
Database Management Systems/organization & administration , Diagnostic Imaging , Computer Simulation , Germany , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Netherlands , User-Computer Interface