Results 1 - 7 of 7
1.
Neuroimage; 179: 604-619, 2018 Oct 1.
Article in English | MEDLINE | ID: mdl-29964187

ABSTRACT

A recently introduced hierarchical generative model unified the inference of effective connectivity in individual subjects and the unsupervised identification of subgroups defined by connectivity patterns. This hierarchical unsupervised generative embedding (HUGE) approach combined a hierarchical formulation of dynamic causal modelling (DCM) for fMRI with Gaussian mixture models and relied on Markov chain Monte Carlo (MCMC) sampling for inference. While well suited for the inversion of complex hierarchical models, MCMC-based sampling suffers from a computational burden that is prohibitive for many applications. To address this problem, this paper derives an efficient variational Bayesian (VB) inversion scheme for HUGE that simultaneously provides approximations to the posterior distribution over model parameters and to the log model evidence. The face validity of the VB scheme was tested using two synthetic fMRI datasets with known ground truth. Additionally, an empirical fMRI dataset of stroke patients and healthy controls was used to evaluate the practical utility of the method in application to real-world problems. Our analyses demonstrate good performance of our VB scheme, with a marked speed-up of model inversion by two orders of magnitude compared to MCMC, while maintaining a similar level of accuracy. Notably, additional acceleration would be possible if parallel computing techniques were applied. Generally, our VB implementation of HUGE is fast enough to support multi-start procedures for whole-group analyses, a useful strategy to ameliorate problems with local extrema. HUGE thus represents a potentially useful practical solution for an important problem in clinical neuromodeling and computational psychiatry, i.e., the unsupervised detection of subgroups in heterogeneous populations that are defined by effective connectivity.
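For readers unfamiliar with the variational machinery involved, the sketch below shows generic mean-field variational Bayes updates for a Gaussian mixture over simulated subject-level parameter estimates. It is purely illustrative: the data, priors, and fixed-variance simplification are assumptions, and it is not the HUGE scheme itself, which additionally inverts the DCM of each subject.

```python
# Illustrative sketch only: mean-field variational Bayes for a Gaussian mixture
# over (simulated) subject-level connectivity estimates. This is NOT the HUGE
# algorithm, just the generic VB-GMM machinery such a scheme builds on.
import numpy as np
from scipy.special import digamma, logsumexp

rng = np.random.default_rng(0)

# Fake "subject-level parameter estimates": two subgroups in 2-D parameter space.
theta = np.vstack([rng.normal([-1.0, 0.5], 0.3, size=(20, 2)),
                   rng.normal([1.0, -0.5], 0.3, size=(20, 2))])
N, D = theta.shape
K = 2                        # number of assumed subgroups
sigma2 = 0.3 ** 2            # assumed known within-cluster variance
alpha0, m0, beta0 = 1.0, np.zeros(D), 1e-2   # Dirichlet / Gaussian-mean priors

r = rng.dirichlet(np.ones(K), size=N)        # initial responsibilities

for _ in range(50):
    # Update Dirichlet (weights) and Gaussian (cluster-mean) posteriors.
    Nk = r.sum(axis=0)
    alpha = alpha0 + Nk
    beta = beta0 + Nk / sigma2               # posterior precision of cluster means
    m = (beta0 * m0 + (r.T @ theta) / sigma2) / beta[:, None]

    # Update responsibilities.
    log_pi = digamma(alpha) - digamma(alpha.sum())
    sq = ((theta[:, None, :] - m[None, :, :]) ** 2).sum(-1)
    log_rho = log_pi - 0.5 * (sq + D / beta) / sigma2
    r = np.exp(log_rho - logsumexp(log_rho, axis=1, keepdims=True))

print("estimated cluster means:\n", m)
print("estimated cluster sizes:", Nk.round(1))
```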


Subject(s)
Algorithms; Brain Mapping/methods; Image Processing, Computer-Assisted/methods; Models, Neurological; Adult; Aged; Bayes Theorem; Datasets as Topic; Female; Humans; Magnetic Resonance Imaging/methods; Male; Middle Aged
2.
Neuroimage; 125: 556-570, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26484827

ABSTRACT

High-resolution blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) at the sub-millimeter scale has become feasible with recent advances in MR technology. In principle, this would enable the study of layered cortical circuits, one of the foundations of cortical computation. However, the spatial layout of cortical blood supply may become an important confound at such high resolution. In particular, venous blood draining back to the cortical surface, perpendicular to the layered structure, is expected to influence the measured responses in different layers. Here, we present an extension of a hemodynamic model commonly used for analyzing fMRI data (in dynamic causal models or biophysical network models) that accounts for such blood draining effects by coupling local hemodynamics across layers. We illustrate the properties of the model and its inversion in a series of simulations and show that it successfully captures layered fMRI data obtained during a simple visual experiment. We conclude that for future studies of the dynamics of layered neuronal circuits with high-resolution fMRI, it will be pivotal to include the effects of blood draining, particularly when trying to infer layer-specific connections in cortex, a theme of key relevance for brain disorders such as schizophrenia and for theories of brain function such as predictive coding.
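To make the idea of cross-layer coupling concrete, the toy script below integrates a standard balloon model for two layers and routes a fraction `lam` of the deep layer's venous outflow (and its deoxyhemoglobin) through the superficial compartment. The coupling form and all parameter values are illustrative assumptions, not the model defined in the paper.

```python
# Toy two-layer balloon model with a venous "draining" coupling from the deep
# to the superficial layer. Purely illustrative; not the published model.
import numpy as np

kappa, gamma, tau, alpha, E0, V0 = 0.65, 0.41, 0.98, 0.32, 0.34, 0.04
k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2        # classical BOLD signal constants
lam = 0.5                                     # assumed draining strength
dt, T = 1e-3, 30.0
t = np.arange(0.0, T, dt)
u = ((t > 2) & (t < 4)).astype(float)         # 2 s box-car input to both layers

def E(f):                                     # oxygen extraction as a function of flow
    return 1.0 - (1.0 - E0) ** (1.0 / f)

# State per layer: s (vasodilatory signal), f (flow), v (volume), q (dHb).
x = np.tile(np.array([0.0, 1.0, 1.0, 1.0]), (2, 1))   # rows: 0 = deep, 1 = superficial
bold = np.zeros((len(t), 2))

for i, ui in enumerate(u):
    s, f, v, q = x.T
    fout = v ** (1.0 / alpha)                 # venous outflow of each layer
    ds = ui - kappa * s - gamma * (f - 1.0)
    df = s
    dv = (f - fout) / tau
    dq = (f * E(f) / E0 - fout * q / v) / tau
    # Draining: part of the deep layer's venous outflow (and its dHb) passes
    # through the superficial compartment -- an illustrative coupling only.
    dv[1] += lam * (fout[0] - fout[1]) / tau
    dq[1] += lam * (fout[0] * q[0] / v[0] - fout[1] * q[1] / v[1]) / tau
    x = x + dt * np.stack([ds, df, dv, dq], axis=1)
    bold[i] = V0 * (k1 * (1 - x[:, 3]) + k2 * (1 - x[:, 3] / x[:, 2]) + k3 * (1 - x[:, 2]))

print("peak BOLD deep/superficial: %.4f / %.4f" % tuple(bold.max(axis=0)))
```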


Subject(s)
Brain Mapping/methods; Brain/blood supply; Hemodynamics/physiology; Magnetic Resonance Imaging/methods; Models, Neurological; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Models, Theoretical; Oxygen/blood
3.
Cogn Neurodyn; 16(1): 1-15, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35116083

ABSTRACT

In generative modeling of neuroimaging data, such as dynamic causal modeling (DCM), one typically considers several alternative models, either to determine the most plausible explanation for observed data (Bayesian model selection) or to account for model uncertainty (Bayesian model averaging). Both procedures rest on estimates of the model evidence, a principled trade-off between model accuracy and complexity. In the context of DCM, the log evidence is usually approximated using variational Bayes. Although this approach is highly efficient, it makes distributional assumptions and is vulnerable to local extrema. This paper introduces the use of thermodynamic integration (TI) for Bayesian model selection and averaging in the context of DCM. TI is based on Markov chain Monte Carlo sampling, which is asymptotically exact but orders of magnitude slower than variational Bayes. We explain the theoretical foundations of TI, covering key concepts such as the free energy, and aim to convey an in-depth understanding of the method starting from its historical origin in statistical physics. In addition, we demonstrate the practical application of TI via a series of examples that guide the user in applying the method. These examples also show that, given an efficient implementation and hardware capable of parallel processing, the challenge of high computational demand can be overcome. The TI implementation presented in this paper is freely available as part of the open-source software TAPAS. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11571-021-09696-9.
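The core identity behind TI is log p(y) = integral from 0 to 1 of E_beta[log p(y | theta)] d(beta), where the expectation is taken under the "power posterior" p(theta | y, beta) proportional to p(y | theta)^beta p(theta). The sketch below illustrates this with a conjugate Gaussian model whose power posteriors can be sampled exactly, so the TI estimate can be checked against the analytic log evidence; it is a minimal illustration, not the TAPAS implementation.

```python
# Toy thermodynamic integration: log evidence of a conjugate Gaussian model,
# estimated from exact samples of the power posteriors and compared with the
# analytic value. Illustrative only; not the TAPAS/DCM implementation.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
s0, s = 2.0, 1.0                        # prior std of theta, observation noise std
y = rng.normal(0.7, s, size=20)         # synthetic data
n = y.size

def loglik(theta):                      # log p(y | theta), vectorised over samples
    return (-0.5 * n * np.log(2 * np.pi * s**2)
            - 0.5 * ((y[None, :] - theta[:, None])**2).sum(1) / s**2)

# Temperature schedule concentrated near beta = 0, as commonly recommended.
betas = np.linspace(0.0, 1.0, 33) ** 5

# For this conjugate model the power posterior at each beta is Gaussian,
# so we can draw exact samples instead of running MCMC chains.
E_ll = []
for b in betas:
    prec = 1.0 / s0**2 + b * n / s**2
    mean = (b * y.sum() / s**2) / prec
    theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=5000)
    E_ll.append(loglik(theta).mean())

logZ_ti = np.trapz(E_ll, betas)         # integrate E_beta[log p(y|theta)] over beta

# Exact log evidence: y ~ N(0, s^2 I + s0^2 1 1^T)
cov = s**2 * np.eye(n) + s0**2 * np.ones((n, n))
logZ_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

print(f"TI estimate: {logZ_ti:.3f}   exact: {logZ_exact:.3f}")
```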

4.
Front Psychiatry; 12: 680811, 2021.
Article in English | MEDLINE | ID: mdl-34149484

ABSTRACT

Psychiatry faces fundamental challenges with regard to mechanistically guided differential diagnosis, as well as prediction of clinical trajectories and treatment response of individual patients. This has motivated the genesis of two closely intertwined fields: (i) Translational Neuromodeling (TN), which develops "computational assays" for inferring patient-specific disease processes from neuroimaging, electrophysiological, and behavioral data; and (ii) Computational Psychiatry (CP), with the goal of incorporating computational assays into clinical decision making in everyday practice. In order to serve as objective and reliable tools for clinical routine, computational assays require end-to-end pipelines from raw data (input) to clinically useful information (output). While these are yet to be established in clinical practice, individual components of this general end-to-end pipeline are being developed and made openly available for community use. In this paper, we present the Translational Algorithms for Psychiatry-Advancing Science (TAPAS) software package, an open-source collection of building blocks for computational assays in psychiatry. Collectively, the tools in TAPAS presently cover several important aspects of the desired end-to-end pipeline, including: (i) tailored experimental designs and optimization of measurement strategy prior to data acquisition, (ii) quality control during data acquisition, and (iii) artifact correction, statistical inference, and clinical application after data acquisition. Here, we review the different tools within TAPAS and illustrate how these may help provide a deeper understanding of neural and cognitive mechanisms of disease, with the ultimate goal of establishing automated pipelines for predictions about individual patients. We hope that the openly available tools in TAPAS will contribute to the further development of TN/CP and facilitate the translation of advances in computational neuroscience into clinically relevant computational assays.
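As a purely hypothetical sketch of what such an end-to-end pipeline might look like in code, the skeleton below strings together the three stages listed above (design, acquisition-time quality control, post-acquisition inference and prediction). None of the names or signatures below are TAPAS interfaces; they are placeholders for illustration.

```python
# Hypothetical end-to-end pipeline skeleton mirroring stages (i)-(iii) above.
# Function names and signatures are illustrative assumptions, not TAPAS APIs.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class AssayResult:
    subject_id: str
    parameters: dict      # e.g. estimated connectivity or learning parameters
    prediction: str       # e.g. predicted subgroup or treatment response

def run_assay(subject_id: str,
              design: Callable[[], dict],
              acquire: Callable[[dict], Sequence[float]],
              quality_ok: Callable[[Sequence[float]], bool],
              invert_model: Callable[[Sequence[float]], dict],
              classify: Callable[[dict], str]) -> AssayResult:
    """Design -> acquisition with QC -> model inversion -> clinical prediction."""
    paradigm = design()                       # (i) optimised experimental design
    data = acquire(paradigm)                  # data acquisition
    if not quality_ok(data):                  # (ii) online quality control
        raise RuntimeError(f"QC failed for {subject_id}; re-acquire or exclude")
    params = invert_model(data)               # (iii) artifact correction + inference
    return AssayResult(subject_id, params, classify(params))
```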

5.
BMC Bioinformatics; 11 Suppl 8: S8, 2010 Oct 26.
Article in English | MEDLINE | ID: mdl-21034433

ABSTRACT

BACKGROUND: We present an infinite mixture-of-experts model to find an unknown number of sub-groups within a given patient cohort based on survival analysis. The effect of patient features on survival is modeled using Cox's proportional hazards model, which yields a non-standard regression component. The model is able to find key explanatory factors (chosen from main effects and higher-order interactions) for each sub-group by enforcing sparsity on the regression coefficients via the Bayesian Group-Lasso. RESULTS: Simulated examples justify the need for such an elaborate framework, compared with simpler models, for identifying sub-groups along with their key characteristics. When applied to a breast-cancer dataset consisting of patients' survival times and protein expression levels, the model identifies two distinct sub-groups with different survival patterns (low-risk and high-risk), along with the respective sets of compound markers. CONCLUSIONS: The unified framework presented here, combining elements of cluster and feature detection for survival analysis, is a powerful tool for analyzing survival patterns within a patient group. The model also demonstrates the feasibility of analyzing complex interactions, which can contribute to the definition of novel prognostic compound markers.
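To illustrate just one ingredient of this framework, the snippet below evaluates the negative log partial likelihood of Cox's proportional hazards model on a small synthetic dataset. The infinite mixture and the Bayesian Group-Lasso prior are not reproduced here, and all data and names are illustrative.

```python
# Negative log partial likelihood of Cox's proportional hazards model for a
# tiny synthetic dataset. Illustrates only the regression component of the
# framework, not the infinite mixture or the group-lasso prior.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))                       # covariates (e.g. protein expression)
beta_true = np.array([1.0, -0.5, 0.0])
time = rng.exponential(np.exp(-X @ beta_true))    # survival times with hazard exp(X beta)
event = rng.random(n) < 0.8                       # roughly 20% right-censoring

def cox_neg_log_partial_likelihood(beta):
    eta = X @ beta
    nll = 0.0
    for i in np.where(event)[0]:
        risk_set = time >= time[i]                # subjects still at risk at t_i
        nll -= eta[i] - logsumexp(eta[risk_set])
    return nll

print("NLL at true beta :", cox_neg_log_partial_likelihood(beta_true))
print("NLL at zero beta :", cox_neg_log_partial_likelihood(np.zeros(p)))
```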


Subject(s)
Breast Neoplasms/mortality; Models, Statistical; Regression Analysis; Bayes Theorem; Breast Neoplasms/diagnosis; Cluster Analysis; Cohort Studies; Computer Simulation; Databases, Factual; Female; Humans; Kaplan-Meier Estimate; Markov Chains; Monte Carlo Method; Prognosis; Proportional Hazards Models; Reproducibility of Results
6.
J Neurosci Methods; 269: 6-20, 2016 Aug 30.
Article in English | MEDLINE | ID: mdl-27141854

ABSTRACT

BACKGROUND: Generative models of neuroimaging data, such as dynamic causal models (DCMs), are commonly used for inferring effective connectivity from individual subject data. Recently introduced "generative embedding" approaches have used DCM-based connectivity parameters for supervised classification of individual patients or for discovering unknown subgroups in heterogeneous groups using unsupervised clustering methods. NEW METHOD: We present a novel framework which combines DCMs with finite mixture models into a single hierarchical model. This approach unifies the inference of connectivity parameters in individual subjects with inference on population structure, i.e. the existence of subgroups defined by model parameters, and allows for empirical Bayesian estimates of a subject's connectivity based on subgroup-specific prior distributions. We introduce a Markov chain Monte Carlo (MCMC) sampling method for inverting this hierarchical generative model. RESULTS: This paper formally introduces the model and demonstrates its face validity in application to both simulated data and an empirical fMRI dataset from healthy controls and patients with schizophrenia. COMPARISON WITH EXISTING METHOD(S): The analysis of our empirical fMRI data demonstrates that our approach yields higher model evidence than conventional non-hierarchical inversion of DCMs. CONCLUSIONS: We have presented a unified framework that jointly infers the effective connectivity parameters in DCMs for multiple subjects and, at the same time, discovers connectivity-defined cluster structure of the whole population, using a mixture model approach.
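As a stripped-down illustration of the clustering layer of such a hierarchical model, the sketch below runs a simple Gibbs sampler for a finite Gaussian mixture over already-estimated subject-level parameter vectors, alternating between mixture weights, cluster means, and assignments under conjugate priors. It omits the DCM likelihood entirely and is not the sampler introduced in the paper; all settings are assumptions.

```python
# Toy Gibbs sampler for a finite Gaussian mixture over subject-level parameter
# vectors: illustrates only the clustering layer, not the DCM likelihood or the
# paper's actual MCMC scheme.
import numpy as np

rng = np.random.default_rng(3)
theta = np.vstack([rng.normal(-1.0, 0.4, size=(15, 2)),     # "patient-like" subgroup
                   rng.normal(+1.0, 0.4, size=(15, 2))])    # "control-like" subgroup
N, D = theta.shape
K, sigma2, tau2, alpha0 = 2, 0.4**2, 4.0, 1.0               # assumed hyperparameters

z = rng.integers(0, K, size=N)          # initial cluster assignments
mu = rng.normal(0.0, 1.0, size=(K, D))  # initial cluster means

for it in range(200):
    # 1) mixture weights | assignments  (Dirichlet-multinomial conjugacy)
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(alpha0 + counts)

    # 2) cluster means | assignments    (Gaussian-Gaussian conjugacy)
    for k in range(K):
        prec = 1.0 / tau2 + counts[k] / sigma2
        mean = (theta[z == k].sum(axis=0) / sigma2) / prec
        mu[k] = rng.normal(mean, 1.0 / np.sqrt(prec), size=D)

    # 3) assignments | weights, means
    logp = (np.log(w)[None, :]
            - 0.5 * ((theta[:, None, :] - mu[None, :, :])**2).sum(-1) / sigma2)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=pi) for pi in p])

print("final cluster sizes:", np.bincount(z, minlength=K))
print("final cluster means:\n", mu.round(2))
```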


Subject(s)
Models, Neurological; Models, Statistical; Neuroimaging/methods; Software; Unsupervised Machine Learning; Adult; Bayes Theorem; Brain/diagnostic imaging; Brain/physiopathology; Cluster Analysis; Computer Simulation; Female; Humans; Magnetic Resonance Imaging/methods; Male; Markov Chains; Monte Carlo Method; Reproducibility of Results; Schizophrenia/classification; Schizophrenia/diagnostic imaging; Schizophrenia/physiopathology
7.
J Neurosci Methods; 257: 7-16, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26384541

ABSTRACT

BACKGROUND: Dynamic causal modeling (DCM) for fMRI is an established method for Bayesian system identification and inference on effective brain connectivity. DCM relies on a biophysical model that links hidden neuronal activity to measurable BOLD signals. Currently, simulations from DCM's biophysical model constitute a serious computational bottleneck. Here, we present Massively Parallel Dynamic Causal Modeling (mpdcm), a toolbox designed to address this bottleneck. NEW METHOD: mpdcm delegates the generation of simulations from DCM's biophysical model to graphical processing units (GPUs). Simulations are generated in parallel by implementing a low-storage explicit Runge-Kutta scheme on a GPU architecture. mpdcm is publicly available under the GPLv3 license. RESULTS: We found that mpdcm efficiently generates large numbers of simulations without compromising their accuracy. As applications of mpdcm, we suggest two computationally expensive sampling algorithms: thermodynamic integration and parallel tempering. COMPARISON WITH EXISTING METHOD(S): mpdcm is up to two orders of magnitude more efficient than the standard implementation in the software package SPM. Given efficient, parallel simulations of a model, parallel tempering improves the mixing properties of the traditional Metropolis-Hastings algorithm at low computational cost. CONCLUSIONS: Future applications of DCM will likely require increasingly large computational resources, for example, when the likelihood landscape of a model is multimodal or when implementing sampling methods for multi-subject analysis. Because GPUs are widely available, such algorithmic advances can be used even without access to large computer grids or the expertise to implement algorithms on them.
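For intuition about the kind of integration scheme mentioned above, the snippet below implements Williamson's classic 3-stage low-storage (2N-storage) Runge-Kutta method and advances a whole batch of simple linear test systems at once by vectorising over a batch axis with NumPy, a CPU stand-in for the GPU parallelism mpdcm actually uses. Only the Runge-Kutta coefficients are standard; everything else is an illustrative assumption, not mpdcm code.

```python
# Williamson's 3-stage low-storage Runge-Kutta scheme, vectorised over a batch
# of independent ODE systems (a CPU stand-in for mpdcm's GPU parallelism).
# The test system is a batch of damped linear oscillators, not a DCM.
import numpy as np

# Standard Williamson (1980) 2N-storage RK3 coefficients.
A = np.array([0.0, -5.0 / 9.0, -153.0 / 128.0])
B = np.array([1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0])
C = np.array([0.0, 1.0 / 3.0, 3.0 / 4.0])

def lsrk3_step(f, t, u, dt):
    """One low-storage RK3 step: only the state u and one register q are stored."""
    q = np.zeros_like(u)
    for a, b, c in zip(A, B, C):
        q = a * q + dt * f(t + c * dt, u)
        u = u + b * q
    return u

# Batch of damped oscillators x'' = -w^2 x - d x', one per "simulation".
n_sim = 512
rng = np.random.default_rng(4)
w = rng.uniform(0.5, 2.0, size=n_sim)          # per-simulation frequency
d = 0.1
state = np.stack([np.ones(n_sim), np.zeros(n_sim)], axis=1)   # columns: x, x'

def rhs(t, u):
    x, v = u[:, 0], u[:, 1]
    return np.stack([v, -w**2 * x - d * v], axis=1)

dt, n_steps = 0.01, 1000
for i in range(n_steps):
    state = lsrk3_step(rhs, i * dt, state, dt)
T = n_steps * dt

# Compare one oscillator against the analytic (underdamped) solution.
wd = np.sqrt(w[0]**2 - (d / 2)**2)
x_exact = np.exp(-d * T / 2) * (np.cos(wd * T) + (d / 2) / wd * np.sin(wd * T))
print(f"numerical {state[0, 0]:.6f} vs analytic {x_exact:.6f}")
```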


Subject(s)
Brain Mapping/methods; Computer Graphics; Magnetic Resonance Imaging/methods; Models, Statistical; Signal Processing, Computer-Assisted; Software; Access to Information; Algorithms; Bayes Theorem; Brain/physiology; Cerebrovascular Circulation/physiology; Computer Simulation; Models, Neurological; Oxygen/blood; Thermodynamics