ABSTRACT
In recent years, neuroscience research has increasingly focused on multimodal approaches. One such approach is the combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). However, no standard procedure for integrating these modalities has been established so far. One promising data-driven approach consists of a joint decomposition of event-related potentials (ERPs) and fMRI maps derived from the response to a particular stimulus. Such an algorithm, joint independent component analysis (JointICA), was proposed by Calhoun et al. (2006). This method provides sources with both fine spatial and fine temporal resolution, and has been shown to yield meaningful results. However, the algorithm's performance has not yet been fully characterized, and no procedure has been proposed to assess the quality of the decomposition. In this paper, we therefore investigate why and how JointICA works. We characterize the algorithm's performance on data obtained in a visual detection task, and compare results for EEG recorded simultaneously with fMRI and for EEG recorded in a separate session (outside the scanner room). We perform several analyses to establish the conditions necessary for a sound decomposition, and to provide additional insights for exploration in future studies. In particular, we examine how the algorithm behaves when different EEG electrodes are used, and we test its robustness with respect to the number of subjects in the study. The algorithm's performance in all experiments is validated against results from previous studies.
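To make the fusion step concrete, the sketch below illustrates the core JointICA operation described above: each subject contributes one row consisting of a normalized ERP waveform concatenated with a normalized fMRI map, and ICA is applied over the joint feature axis so that each recovered source pairs an ERP time course with an fMRI map that covary across subjects. This is a minimal sketch using scikit-learn's FastICA on random stand-in data; the array names, dimensions, and the choice of FastICA are illustrative assumptions, not the implementation of Calhoun et al. (2006).

```python
# Minimal JointICA (jICA) sketch; shapes and data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_voxels = 20, 300, 5000

# Stand-in data: one ERP waveform and one fMRI contrast map per subject.
erp = rng.standard_normal((n_subjects, n_timepoints))
fmri = rng.standard_normal((n_subjects, n_voxels))

# Normalize each modality per subject so neither dominates the decomposition.
erp /= np.linalg.norm(erp, axis=1, keepdims=True)
fmri /= np.linalg.norm(fmri, axis=1, keepdims=True)

# Joint data matrix: subjects x (ERP time points + fMRI voxels).
X = np.hstack([erp, fmri])

# Run ICA over the feature axis (hence the transpose): each column of
# `sources` is one joint spatio-temporal component, and `mixing_` holds
# the per-subject loadings on each component.
ica = FastICA(n_components=5, whiten="unit-variance", random_state=0)
sources = ica.fit_transform(X.T)   # (time points + voxels) x components
loadings = ica.mixing_             # subjects x components

# Split each joint source back into its ERP and fMRI parts.
erp_part, fmri_part = sources[:n_timepoints], sources[n_timepoints:]
print(erp_part.shape, fmri_part.shape, loadings.shape)
```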