Results 1 - 9 of 9
1.
Hum Brain Mapp ; 43(4): 1231-1255, 2022 03.
Article in English | MEDLINE | ID: mdl-34806255

ABSTRACT

Data fusion refers to the joint analysis of multiple datasets that provide different (e.g., complementary) views of the same task. In general, it can extract more information than separate analyses can. Jointly analyzing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) measurements has proven highly beneficial to the study of brain function, mainly because these neuroimaging modalities have complementary spatiotemporal resolution: EEG offers high temporal resolution, while fMRI offers high spatial resolution. The EEG-fMRI fusion methods reported so far ignore the underlying multiway nature of the data in at least one of the modalities and/or rely on very strong assumptions about the relation between the respective datasets. For example, in multisubject analysis, it is commonly assumed that the hemodynamic response function is known a priori for all subjects and/or that the coupling across corresponding modes is exact (hard). In this article, these two limitations are overcome by adopting tensor models for both modalities and by following soft and flexible coupling approaches to implement the multimodal fusion. The obtained results are compared against those of parallel independent component analysis and hard-coupling alternatives, on both synthetic and real data (epilepsy and a visual oddball paradigm). Our results demonstrate the clear advantage of soft and flexible coupled tensor decompositions in scenarios that do not conform to the hard-coupling assumption.
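The soft-coupling idea can be illustrated with a minimal numpy sketch, not the authors' implementation: two rank-R CP models, one per modality, whose shared (here: subject) factors are pulled together by a penalty rather than forced to be equal. The mode names and the choice of coupled mode are assumptions for illustration.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Rank-R CP model: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def soft_coupled_loss(T_eeg, T_fmri, factors_eeg, factors_fmri, lam=1.0):
    """Objective of a soft-coupled CP model: two data-fit terms plus a
    penalty that pulls the shared (subject) factors towards each other,
    instead of constraining them to be identical (hard coupling)."""
    A1, B1, C1 = factors_eeg    # channels x R, time x R, subjects x R
    A2, B2, C2 = factors_fmri   # voxels x R,   time x R, subjects x R
    fit1 = np.linalg.norm(T_eeg - cp_reconstruct(A1, B1, C1)) ** 2
    fit2 = np.linalg.norm(T_fmri - cp_reconstruct(A2, B2, C2)) ** 2
    coupling = lam * np.linalg.norm(C1 - C2) ** 2  # soft subject coupling
    return fit1 + fit2 + coupling
```

Setting `lam` very large recovers hard coupling in the limit; setting it to zero decouples the two decompositions entirely.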


Subject(s)
Brain; Electroencephalography/methods; Functional Neuroimaging/methods; Magnetic Resonance Imaging/methods; Nerve Net; Adult; Brain/diagnostic imaging; Brain/physiology; Epilepsy/diagnostic imaging; Female; Humans; Male; Models, Theoretical; Multimodal Imaging; Nerve Net/diagnostic imaging; Nerve Net/physiology; Young Adult
2.
Med Image Anal ; 84: 102706, 2023 02.
Article in English | MEDLINE | ID: mdl-36516557

ABSTRACT

Convolutional Neural Networks (CNNs) with U-shaped architectures have dominated medical image segmentation, which is crucial for various clinical purposes. However, the inherent locality of convolution prevents CNNs from fully exploiting the global context that is essential for better recognition of some structures, e.g., brain lesions. Transformers have recently shown promising performance on vision tasks, including semantic segmentation, mainly due to their capability of modeling long-range dependencies. Nevertheless, the quadratic complexity of attention forces existing Transformer-based models to apply self-attention layers only after reducing the image resolution, which limits their ability to capture global contexts present at higher resolutions. Therefore, this work introduces a family of models, dubbed Factorizer, which leverages the power of low-rank matrix factorization to construct an end-to-end segmentation model. Specifically, we propose a linearly scalable approach to context modeling, formulating Nonnegative Matrix Factorization (NMF) as a differentiable layer integrated into a U-shaped architecture. The shifted-window technique is also used in combination with NMF to effectively aggregate local information. Factorizers compete favorably with CNNs and Transformers in terms of accuracy, scalability, and interpretability, achieving state-of-the-art results on the BraTS dataset for brain tumor segmentation and the ISLES'22 dataset for stroke lesion segmentation. Highly meaningful NMF components give Factorizers an additional interpretability advantage over CNNs and Transformers. Moreover, our ablation studies reveal a distinctive feature of Factorizers that enables a significant speed-up in inference for a trained Factorizer without any extra steps and without sacrificing much accuracy. The code and models are publicly available at https://github.com/pashtari/factorizer.
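A standalone sketch of the NMF building block may help: plain Lee-Seung multiplicative updates in numpy. Factorizer itself embeds a differentiable NMF solver inside a network and operates on windowed feature matrices, neither of which is reproduced here; this only shows the factorization the layer computes.

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Approximate a nonnegative matrix X as W @ H with W, H >= 0, using
    Lee-Seung multiplicative updates. The nonnegative components (rows of
    H) are what gives the factorization its interpretability."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the network setting, the context-modeling cost of this step scales linearly with the number of matrix entries, in contrast to the quadratic cost of self-attention.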


Subject(s)
Brain Neoplasms; Stroke; Humans; Algorithms; Brain Neoplasms/diagnostic imaging; Neural Networks, Computer; Semantics; Image Processing, Computer-Assisted
3.
Front Big Data ; 5: 688496, 2022.
Article in English | MEDLINE | ID: mdl-35224482

ABSTRACT

We introduce a supervised learning framework for target functions that are well approximated by a sum of (few) separable terms. The framework approximates each component function by a B-spline, resulting in an approximant whose underlying coefficient tensor of the tensor-product expansion has a low-rank polyadic decomposition parametrization. By exploiting the multilinear structure, as well as the sparsity pattern of the compactly supported B-spline basis terms, we demonstrate how such an approximant is well suited for regression and classification tasks, using the Gauss-Newton algorithm to train the parameters. Various numerical examples are provided to analyze the effectiveness of the approach.
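A minimal two-variable sketch of the idea, using hat functions (degree-1 B-splines) so the code stays self-contained: the full tensor-product coefficient matrix C = A @ Bc.T is never formed, only its low-rank factors. The basis choice, rank, and names are illustrative assumptions, and Gauss-Newton training is not reproduced.

```python
import numpy as np

def hat_basis(x, centers, width):
    # Degree-1 B-spline (hat) basis: phi_i(x) = max(0, 1 - |x - c_i| / width).
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers[None, :]) / width)

def separable_eval(x, y, A, Bc, centers, width):
    """Evaluate f(x,y) = sum_r (sum_i A[i,r] phi_i(x)) * (sum_j Bc[j,r] phi_j(y)).
    The implied tensor-product coefficient matrix is C = A @ Bc.T (rank R),
    so storage and evaluation scale linearly, not quadratically, in the
    basis size."""
    Bx = hat_basis(x, centers, width)   # (n_points, n_basis)
    By = hat_basis(y, centers, width)
    return np.sum((Bx @ A) * (By @ Bc), axis=1)
```

With d input variables the same construction gives a rank-R polyadic parametrization of a d-way coefficient tensor, which is the case the paper treats.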

4.
IEEE Trans Biomed Eng ; 66(2): 584-594, 2019 02.
Article in English | MEDLINE | ID: mdl-29993479

ABSTRACT

OBJECTIVE: Magnetic resonance spectroscopic imaging (MRSI) signals are often corrupted by residual water and artifacts. Residual water suppression plays an important role in accurate and efficient quantification of metabolites from MRSI. A tensor-based method for suppressing residual water is proposed. METHODS: A third-order tensor is constructed by stacking the Löwner matrices corresponding to each MRSI voxel spectrum along the third mode. A canonical polyadic decomposition is applied to the tensor to extract the water component and subsequently remove it from the original MRSI signals. RESULTS: The proposed method, applied to both simulated and in-vivo MRSI signals, showed good water suppression performance. CONCLUSION: The tensor-based Löwner method suppresses residual water in MRSI signals better than the widely used subspace-based Hankel singular value decomposition method. SIGNIFICANCE: The tensor method suppresses residual water simultaneously in all voxels of the MRSI grid, which helps prevent failure of the water suppression in single voxels.
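The tensorization step can be sketched directly (a minimal numpy illustration, not the authors' code; the index-splitting scheme is an assumption). A Löwner matrix is built from a sampled spectrum by splitting the sample points into two disjoint sets, and one such matrix per voxel is stacked along the third mode:

```python
import numpy as np

def loewner_matrix(f, x_idx, y_idx):
    """Löwner matrix of a sampled spectrum f on disjoint index sets X and Y:
    L[i, j] = (f[x_i] - f[y_j]) / (x_i - y_j). Spectra built from a few
    rational-like components give low-rank Löwner matrices, which the CPD
    of the stacked tensor exploits to isolate the water component."""
    xi = np.asarray(x_idx, dtype=float)
    yj = np.asarray(y_idx, dtype=float)
    return (f[x_idx][:, None] - f[y_idx][None, :]) / (xi[:, None] - yj[None, :])

def loewner_tensor(spectra, x_idx, y_idx):
    # One Löwner matrix per MRSI voxel spectrum, stacked along mode 3.
    return np.stack([loewner_matrix(s, x_idx, y_idx) for s in spectra], axis=2)
```

For a single rational component f(t) = 1/(t - p), the Löwner matrix is exactly rank one, which is the property that makes the decomposition work.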


Subject(s)
Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Signal Processing, Computer-Assisted; Algorithms; Artifacts; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Humans; Water/chemistry
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 438-441, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29059904

ABSTRACT

Cardiac arrhythmias, or irregular heartbeats, are an important feature in assessing the risk of sudden cardiac death and other cardiac disorders. Automatic classification of irregular heartbeats is therefore an important part of ECG analysis. We propose a tensor-based method for single- and multi-channel irregular heartbeat classification. The method tensorizes the ECG data matrix by segmenting each signal beat-by-beat and stacking the results into a third-order tensor with dimensions channel × time × heartbeat. We use the multilinear singular value decomposition to model the obtained tensor. Next, we formulate the classification task as the computation of a Kronecker product equation. We apply our method to the INCART dataset, obtaining promising results.
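The tensorization and modeling steps above can be sketched as follows (a minimal numpy illustration under assumed fixed-length beats; the Kronecker-product-equation classifier is not reproduced):

```python
import numpy as np

def beat_tensor(ecg, beat_onsets, beat_len):
    """Segment a (channels x time) ECG beat-by-beat and stack the segments
    into a third-order channel x time x heartbeat tensor."""
    return np.stack([ecg[:, s:s + beat_len] for s in beat_onsets], axis=2)

def mlsvd_factors(T):
    """Mode-n factor matrices of the multilinear SVD: the left singular
    vectors of each mode-n unfolding of the tensor."""
    factors = []
    for n in range(T.ndim):
        unfolding = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    return factors
```

In practice beats have varying lengths and are resampled to a common grid before stacking; fixed onsets and a fixed `beat_len` are assumptions made here for brevity.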


Subject(s)
Arrhythmias, Cardiac; Electrocardiography; Heart Rate; Humans; Signal Processing, Computer-Assisted
6.
PLoS One ; 7(5): e37840, 2012.
Article in English | MEDLINE | ID: mdl-22693578

ABSTRACT

BACKGROUND: In systems biology it is common to obtain, for the same set of biological entities, information from multiple sources. Examples include expression data for the same set of orthologous genes screened in different organisms and data on the same set of culture samples obtained with different high-throughput techniques. A major challenge is to find the important biological processes underlying the data and to disentangle the processes common to all data sources from the processes distinctive for a specific source. Recently, two promising simultaneous data integration methods have been proposed to attain this goal, namely generalized singular value decomposition (GSVD) and simultaneous component analysis with rotation to common and distinctive components (DISCO-SCA). RESULTS: Both theoretical analyses and applications to biologically relevant data show that: (1) straightforward applications of GSVD yield unsatisfactory results, (2) DISCO-SCA performs well, (3) provided proper pre-processing and algorithmic adaptations, GSVD reaches a performance level similar to that of DISCO-SCA, and (4) DISCO-SCA is directly generalizable to more than two data sources. The biological relevance of DISCO-SCA is illustrated with two applications. First, in a setting of comparative genomics, it is shown that DISCO-SCA recovers a common theme of cell cycle progression and a yeast-specific response to pheromones. The biological annotation was obtained by applying Gene Set Enrichment Analysis in an appropriate way. Second, in an application of DISCO-SCA to metabolomics data for Escherichia coli obtained with two different chemical analysis platforms, it is illustrated that the metabolites involved in some of the biological processes underlying the data are detected by only one of the two platforms; therefore, platforms for microbial metabolomics should be tailored to the biological question.
CONCLUSIONS: Both DISCO-SCA and properly applied GSVD are promising integrative methods for finding common and distinctive processes in multisource data. Open source code for both methods is provided.
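The simultaneous-component step that both methods build on can be sketched with a minimal numpy example (an unrotated SCA, not DISCO-SCA itself; the rotation toward common/distinctive structure is omitted):

```python
import numpy as np

def sca_components(X1, X2, n_comp):
    """Simultaneous component analysis of two data blocks sharing the row
    (sample) mode: an SVD of the concatenation [X1 | X2]. A component whose
    loadings live almost entirely in one block is 'distinctive' for that
    block, otherwise 'common'; DISCO-SCA additionally rotates the solution
    towards this ideal common/distinctive structure."""
    X = np.hstack([X1, X2])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]     # shared component scores
    P1 = Vt[:n_comp, :X1.shape[1]]          # loadings on block 1
    P2 = Vt[:n_comp, X1.shape[1]:]          # loadings on block 2
    return scores, P1, P2
```

As the paper notes for GSVD, proper per-block pre-processing (e.g., scaling each block to equal total variance) matters before concatenating; that step is omitted here.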


Subject(s)
Computational Biology/methods; Statistics as Topic/methods; Escherichia coli/metabolism; Gene Expression Profiling; Genomics; Metabolomics; Saccharomyces cerevisiae/genetics
7.
Neural Netw ; 24(8): 861-74, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21703821

ABSTRACT

Tensor-based techniques for learning allow one to exploit the structure of carefully chosen representations of data. This is a desirable feature, in particular when the number of training patterns is small, which is often the case in areas such as biosignal processing and chemometrics. However, the class of tensor-based models is somewhat restricted and might suffer from limited discriminative power. On a different track, kernel methods lead to flexible nonlinear models that have proven successful in many different contexts. Nonetheless, a naive application of kernel methods does not exploit the structural properties of the given tensorial representations. The goal of this work is to go beyond this limitation by introducing non-parametric tensor-based models. The proposed framework aims at improving the discriminative power of supervised tensor-based models while still exploiting the structural information embodied in the data. We begin by introducing a feature space formed by multilinear functionals, which can be considered the infinite-dimensional analogue of tensors. We then show how to implicitly map input patterns into such a feature space by means of kernels that exploit the algebraic structure of data tensors. The proposed tensorial kernel links to the multilinear singular value decomposition (MLSVD) and features an interesting invariance property; the approach leads to convex optimization and fits into the same primal-dual framework underlying SVM-like algorithms.
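One way to realize such a structure-exploiting kernel is sketched below: a Gaussian-type kernel on the chordal distances between the mode-n singular subspaces of two data tensors. This is a minimal sketch in the spirit of the paper, not its exact definition; the subspace dimension `r` and bandwidth `gamma` are illustrative parameters.

```python
import numpy as np

def modal_subspace(T, n, r):
    # r leading left singular vectors of the mode-n unfolding of T.
    M = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :r]

def tensorial_kernel(X, Y, r=1, gamma=1.0):
    """Product over modes of Gaussian kernels on the squared chordal
    distance d^2 = r - ||Ux^T Uy||_F^2 between the mode-n singular
    subspaces; invariant to any transformation that leaves those
    subspaces unchanged."""
    k = 1.0
    for n in range(X.ndim):
        Ux, Uy = modal_subspace(X, n, r), modal_subspace(Y, n, r)
        d2 = r - np.linalg.norm(Ux.T @ Uy) ** 2
        k *= np.exp(-gamma * d2)
    return k
```

Such a kernel can be plugged directly into any SVM solver in place of a vectorized Gaussian kernel, which is what loses the tensorial structure.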


Subject(s)
Artificial Intelligence; Data Interpretation, Statistical; Area Under Curve; Databases, Factual; Finite Element Analysis; Humans; Models, Statistical; Models, Theoretical; Normal Distribution; Reproducibility of Results; Sign Language; Signal Processing, Computer-Assisted; Software; Support Vector Machine
8.
Comput Intell Neurosci ; : 58253, 2007.
Article in English | MEDLINE | ID: mdl-18301715

ABSTRACT

Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper, we introduce a new concept for automatic, fast, and objective localisation of the ictal onset zone in ictal EEG recordings. Canonical decomposition of ictal EEG decomposes the EEG into atoms, one or more of which are related to the seizure activity. A single dipole was then fitted to model the potential distribution of each epileptic atom. In this study, we performed a simulation study to estimate the dipole localisation error. Ictal dipole localisation was very accurate, even at low signal-to-noise ratios; was not affected by the frequency of the seizure activity or by frequency changes; and was minimally affected by the waveform and depth of the ictal onset zone location. The ictal dipole localisation error was around 10.0 mm using 21 electrodes and improved more than tenfold, to the range of 0.5-1.0 mm, using 148 channels. In conclusion, our simulation study shows that canonical decomposition of ictal scalp EEG allows a robust and accurate localisation of the ictal onset zone.
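The canonical (polyadic) decomposition underlying the atom extraction can be sketched with a small alternating-least-squares routine in numpy (an illustrative implementation, not the authors'; the mode interpretation, e.g. space x time x frequency, and the rank are assumptions, and the dipole-fitting step is not reproduced):

```python
import numpy as np

def unfold(T, n):
    # Mode-n unfolding: mode n becomes the rows, remaining modes the columns.
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    """Canonical polyadic decomposition of a third-order tensor by
    alternating least squares; each rank-1 term is one 'atom', whose
    spatial signature a dipole can subsequently be fitted to."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for n in range(3):
            idx = [m for m in range(3) if m != n]
            # Khatri-Rao product of the other two factor matrices.
            KR = np.einsum('ir,jr->ijr', A[idx[0]], A[idx[1]]).reshape(-1, rank)
            G = (A[idx[0]].T @ A[idx[0]]) * (A[idx[1]].T @ A[idx[1]])
            A[n] = unfold(T, n) @ KR @ np.linalg.pinv(G)
    return A
```

The first factor matrix then plays the role of the atoms' scalp potential distributions to which single dipoles are fitted.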

9.
Magn Reson Med ; 54(6): 1519-29, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16276498

ABSTRACT

In this article, an accurate and efficient technique for tissue typing is presented. The proposed technique is based on Canonical Correlation Analysis, a statistical method able to simultaneously exploit the spectral and spatial information characterizing Magnetic Resonance Spectroscopic Imaging (MRSI) data. Recently, Canonical Correlation Analysis has been successfully applied to other types of biomedical data, such as functional MRI data. Here, it is adapted for MRSI data processing in order to retrieve, accurately and efficiently, the possible tissue types that characterize the organ under investigation. The potential and limitations of the new technique have been investigated using simulated as well as in vivo prostate MRSI data, and extensive studies demonstrate high accuracy, robustness, and efficiency. Moreover, the performance of Canonical Correlation Analysis has been compared to that of ordinary correlation analysis. The test results show that Canonical Correlation Analysis performs best in terms of accuracy and robustness.
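The core CCA computation can be sketched with the standard QR-based algorithm in numpy (a generic sketch, not the paper's MRSI-specific adaptation; in the paper one data set would hold spectral model signals and the other the voxel spectra in a spatial neighborhood):

```python
import numpy as np

def cca_top_correlation(X, Y):
    """Top canonical correlation between data sets X (n x p) and Y (n x q):
    center each block, orthonormalize via QR, and take the largest singular
    value of Qx^T Qy (Bjorck-Golub algorithm)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]
```

Ordinary correlation analysis compares one fixed pair of variables; CCA instead searches over linear combinations within each block, which is what lets it pool spectral and spatial information.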


Subject(s)
Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Magnetic Resonance Spectroscopy/methods; Pattern Recognition, Automated/methods; Prostatic Neoplasms/pathology; Algorithms; Artificial Intelligence; Humans; Information Storage and Retrieval/methods; Magnetic Resonance Imaging/instrumentation; Magnetic Resonance Spectroscopy/instrumentation; Male; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity; Statistics as Topic