Results 1 - 17 of 17
1.
J Neural Eng ; 21(1)2024 01 29.
Article in English | MEDLINE | ID: mdl-38215493

ABSTRACT

Objective. Alzheimer's disease is a progressive neurodegenerative dementia that poses a significant global health threat. It is essential to detect patients at the mild cognitive impairment (MCI) stage or even earlier, enabling effective interventions to prevent further deterioration. This study focuses on the early prediction of dementia from Magnetic Resonance Imaging (MRI) data using the proposed Graph Convolutional Networks (GCNs). Approach. Specifically, we developed a functional connectivity (FC) based GCN framework for binary classification using resting-state fMRI data. We explored different types and processing methods of FC and evaluated the performance on the OASIS-3 dataset. We developed the GCN model for two different purposes: (1) MCI diagnosis: classifying MCI from normal controls (NCs); and (2) dementia risk prediction: classifying NCs from subjects who have the potential to develop MCI but have not been clinically diagnosed. Main results. The experiments revealed several important findings. First, the proposed GCN outperformed both the baseline GCN and a Support Vector Machine (SVM), achieving the best average accuracy of 80.3% (11.7% higher than the baseline GCN and 23.5% higher than SVM) and a highest accuracy of 91.2%. Second, the GCN framework with (absolute) individual FC generally performed slightly better than that with global FC. However, GCNs using global graphs with appropriate connectivity can achieve performance equivalent or superior to individual graphs in some cases, which highlights the importance of suitable connectivity. Additionally, the results indicate that the self-network connectivity of specific brain networks (such as the default mode, visual, ventral attention, and somatomotor networks) may play a more significant role in GCN classification. Significance. Overall, this study offers valuable insights into the application of GCNs to brain analysis and the early diagnosis of dementia. It contributes to the understanding of MCI and has substantial potential for clinical application in early diagnosis and intervention for dementia and other neurodegenerative diseases. Our code for the GCN implementation is available at: https://github.com/Shuning-Han/FC-based-GCN.


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Dementia, Humans, Magnetic Resonance Imaging/methods, Brain, Cognitive Dysfunction/diagnostic imaging, Brain Mapping/methods, Dementia/diagnostic imaging, Alzheimer Disease/diagnostic imaging
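As a rough illustration of the functional-connectivity graphs this entry describes (the authors' actual implementation is at the GitHub link above), an adjacency matrix can be derived from ROI time series; the Pearson-correlation FC and the 0.3 threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fc_adjacency(ts, threshold=0.3):
    """Build a graph adjacency from ROI time series.

    ts: (n_timepoints, n_rois) resting-state signal matrix.
    Returns a binary adjacency keeping |Pearson r| above `threshold`
    (the absolute-FC variant mentioned in the abstract); the threshold
    value is illustrative only.
    """
    fc = np.corrcoef(ts.T)             # (n_rois, n_rois) correlation matrix
    adj = (np.abs(fc) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)         # no self-loops
    return adj

rng = np.random.default_rng(0)
adj = fc_adjacency(rng.standard_normal((200, 10)))
```

A GCN would then take this adjacency together with node features (e.g. regional signal statistics) as input.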
2.
Sensors (Basel) ; 23(23)2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38067750

ABSTRACT

Machine learning is an effective method for developing automatic algorithms for analysing sophisticated biomedical data [...].


Subject(s)
Algorithms, Machine Learning
3.
IEEE J Biomed Health Inform ; 27(8): 3867-3877, 2023 08.
Article in English | MEDLINE | ID: mdl-37227915

ABSTRACT

The classification of limb movements can provide control commands for non-invasive brain-computer interfaces. Previous studies have focused on classifying left versus right limbs; the classification of different types of upper limb movements has often been ignored, even though it would provide more actively evoked control commands in a brain-computer interface. Moreover, few machine learning methods reach state-of-the-art performance in the multi-class classification of limb movements. This work focuses on the multi-class classification of upper limb movements and proposes the multi-class filter bank task-related component analysis (mFBTRCA) method, which consists of three steps: spatial filtering, similarity measuring and filter bank selection. The spatial filter, namely task-related component analysis, is first used to remove noise from the EEG signals. Canonical correlation measures the similarity of the spatially filtered signals and is used for feature extraction. The correlation features are extracted from multiple low-frequency filter banks. Minimum-redundancy maximum-relevance selects the essential features from all the correlation features, and finally a support vector machine classifies the selected features. The proposed method is evaluated against previously used models on two datasets. mFBTRCA achieved classification accuracies of 0.4193 ± 0.0780 (7 classes) and 0.4032 ± 0.0714 (5 classes), improving on the best accuracies achieved by the compared methods (0.3590 ± 0.0645 and 0.3159 ± 0.0736, respectively). The proposed method is expected to provide more control commands in non-invasive brain-computer interface applications.


Subject(s)
Brain-Computer Interfaces, Electroencephalography, Humans, Electroencephalography/methods, Signal Processing, Computer-Assisted, Algorithms, Upper Extremity, Movement
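The minimum-redundancy maximum-relevance step used by mFBTRCA can be sketched as a greedy search; the version below uses absolute Pearson correlation as a stand-in for both the relevance and redundancy terms (the literature typically uses mutual-information criteria), so treat it as an illustrative simplification rather than the paper's algorithm.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy minimum-redundancy maximum-relevance selection (sketch).

    relevance: |corr(feature, label)|; redundancy: mean |corr| with
    already-selected features. Both are simplified proxies.
    """
    n_feat = X.shape[1]
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]           # most relevant feature first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy        # max relevance, min redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
y = X[:, 2] + 0.1 * rng.standard_normal(100)         # feature 2 is informative
picked = mrmr_select(X, y, 3)
```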
4.
J Neural Eng ; 19(6)2022 11 16.
Article in English | MEDLINE | ID: mdl-36317288

ABSTRACT

Objective. Pre-movement decoding plays an important role in detecting the onset of upper-limb actions from low-frequency electroencephalography (EEG) signals before the movement occurs. In this work, a binary classification method between two different states is proposed. Approach. The proposed method, referred to as filter bank standard task-related component analysis (FBTRCA), incorporates filter bank selection into the standard task-related component analysis (STRCA) method. In FBTRCA, the EEG signals are first divided into multiple sub-bands that start at fixed frequencies and whose end frequencies follow an arithmetic sequence. The STRCA method is then applied to the EEG signals in these bands to extract canonical correlation patterns (CCPs). The minimum-redundancy maximum-relevance feature selection method selects essential features from these correlation patterns across all sub-bands. Finally, the selected features are classified with a binary support vector machine classifier. A convolutional neural network (CNN) is an alternative approach for selecting canonical correlation patterns. Main results. The three methods were evaluated on EEG signals in the time window from 2 s before to 1 s after movement onset. In the binary classification between a movement state and the resting state, FBTRCA achieved an average accuracy of 0.8968 ± 0.0847, while the accuracies of STRCA and CNN were 0.8228 ± 0.1149 and 0.8828 ± 0.0917, respectively. In the binary classification between two actions, the accuracies of STRCA, CNN, and FBTRCA were 0.6611 ± 0.1432, 0.6993 ± 0.1271, and 0.7178 ± 0.1274, respectively. Feature selection across filter banks, as in FBTRCA, produces results comparable to STRCA. Significance. The proposed method provides a way to select filter banks in pre-movement decoding and thus improves classification performance. The improved pre-movement decoding of single upper limb movements is expected to give people with severe motor disabilities more natural, non-invasive control of external devices.


Subject(s)
Brain-Computer Interfaces, Humans, Electroencephalography/methods, Support Vector Machine, Movement, Upper Extremity, Algorithms, Imagination
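The sub-band layout described above (all bands sharing a fixed start frequency, end frequencies in an arithmetic sequence) can be sketched in a few lines; the numeric defaults here are illustrative, not taken from the paper.

```python
def filter_banks(start=0.5, step=1.0, n_banks=6):
    """Return (low, high) frequency pairs in Hz for FBTRCA-style
    filter banks: every band starts at the same fixed low frequency,
    and the end frequencies follow an arithmetic sequence with the
    given step. Values are illustrative only.
    """
    return [(start, start + step * (i + 1)) for i in range(n_banks)]

bands = filter_banks()
```

Each (low, high) pair would then parameterise one band-pass filter applied to the EEG before STRCA.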
5.
Hum Brain Mapp ; 43(17): 5220-5234, 2022 12 01.
Article in English | MEDLINE | ID: mdl-35778791

ABSTRACT

Understanding the laminar structure of the brain helps further develop our knowledge of brain function. However, since most layer segmentation methods are invasive, they are difficult to apply to the human brain in vivo. To explore the human brain's laminar structure noninvasively and systematically, the K-means clustering algorithm was used to automatically segment the left hemisphere into two layers, superficial and deep, using a 7 Tesla (T) diffusion magnetic resonance imaging (dMRI) open dataset. The obtained layer thicknesses were then compared with those of the BigBrain reference dataset, which segments the neocortex into six layers based on the von Economo atlas. The results show a significant correlation between our automatically segmented superficial layer thickness and the thickness of layers 1-3 in the reference histological data, as well as between our deep layer thickness and the thickness of layers 4-6. Second, we constructed laminar connections between two pairs of unidirectionally connected regions, consistent with prior research. Finally, we conducted a laminar analysis of working memory, which was challenging in the past, and explained the conclusions of the functional analysis. Our work demonstrates that it is possible to segment the human cortex into layers noninvasively using dMRI data, and further explores the mechanisms of the human brain.


Subject(s)
Magnetic Resonance Imaging, Memory, Short-Term, Humans, Magnetic Resonance Imaging/methods, Cerebral Cortex/diagnostic imaging, Diffusion Magnetic Resonance Imaging, Brain
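The two-cluster K-means idea behind the superficial/deep split can be shown with a toy one-dimensional version; real use would cluster dMRI-derived depth features rather than the made-up values below.

```python
def kmeans_two_layers(depths, iters=20):
    """Toy 1-D K-means (k=2) splitting depth-profile feature values
    into two clusters, mirroring the superficial/deep segmentation
    idea. Centroids are initialised at the min and max values.
    """
    c0, c1 = min(depths), max(depths)
    for _ in range(iters):
        a = [x for x in depths if abs(x - c0) <= abs(x - c1)]
        b = [x for x in depths if abs(x - c0) > abs(x - c1)]
        c0 = sum(a) / len(a)          # update cluster-0 centroid
        c1 = sum(b) / len(b)          # update cluster-1 centroid
    return [0 if abs(x - c0) <= abs(x - c1) else 1 for x in depths]

labels = kmeans_two_layers([0.1, 0.2, 0.15, 0.8, 0.9, 0.85])
```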
6.
Front Neurosci ; 16: 866735, 2022.
Article in English | MEDLINE | ID: mdl-35864986

ABSTRACT

Gifted children and normal controls can be distinguished by analyzing the structural connectivity (SC) extracted from MRI data. Previous studies have improved classification accuracy by extracting several features of the brain regions. However, the limited size of the database may degrade the training of deep neural networks as classification models. To this end, we propose a data augmentation method that adds artificial samples generated using graph empirical mode decomposition (GEMD). We decompose the training samples by GEMD to obtain intrinsic mode functions (IMFs), randomly recombine the IMFs to generate new artificial samples, and then use both the original training samples and the artificial samples to enlarge the training set. To evaluate the proposed method, we use a deep neural network architecture called BrainNetCNN to classify the SCs of MRI data with and without data augmentation. The results show that data augmentation with GEMD improves the average classification performance from 55.7% to 78.0%, and a state-of-the-art classification accuracy of 93.3% is obtained using GEMD in some cases. Our results demonstrate that the proposed GEMD augmentation method can effectively enlarge the limited gifted children dataset and improve classification accuracy. We also found that classification accuracy improves when specific features extracted from brain regions are used, reaching 93.1% for some feature selection methods.
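The recombination step of this augmentation scheme is easy to sketch once the IMFs are in hand; the GEMD decomposition itself is not reproduced here, and the toy IMF values below are invented for illustration.

```python
import random

def recombine_imfs(imf_sets, rng):
    """GEMD-style augmentation sketch: given per-sample lists of IMFs
    (each IMF a list of graph-signal values), build one artificial
    sample by picking each IMF level from a randomly chosen training
    sample and summing the levels. Assumes all samples share the same
    number of IMF levels and signal length.
    """
    n_levels = len(imf_sets[0])
    n_vals = len(imf_sets[0][0])
    chosen = [rng.choice(imf_sets)[level] for level in range(n_levels)]
    return [sum(imf[i] for imf in chosen) for i in range(n_vals)]

rng = random.Random(42)
# two training samples, each decomposed into two IMF levels of length 3
imfs_a = [[1.0, 1.0, 1.0], [0.5, 0.0, -0.5]]
imfs_b = [[2.0, 2.0, 2.0], [1.0, 0.0, -1.0]]
sample = recombine_imfs([imfs_a, imfs_b], rng)
```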

7.
Entropy (Basel) ; 23(9)2021 Sep 06.
Article in English | MEDLINE | ID: mdl-34573795

ABSTRACT

An electroencephalogram (EEG) is an electrophysiological signal reflecting the functional state of the brain. As the control signal of a brain-computer interface (BCI), EEG may build a bridge between humans and computers and improve the quality of life of patients with movement disorders. The collected EEG signals are extremely susceptible to contamination by electromyography (EMG) artifacts, which affect their original characteristics. Therefore, EEG denoising is an essential preprocessing step in any BCI system. Previous studies have confirmed that the combination of ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA) can effectively suppress EMG artifacts. However, the time-consuming iterative process of EEMD may limit the application of the EEMD-CCA method to real-time monitoring in BCI. Compared with the existing EEMD, the recently proposed signal-serialization-based EEMD (sEEMD) is a good choice for effective signal analysis and fast mode decomposition. In this study, an EMG denoising method based on sEEMD and CCA is discussed. All analyses are carried out on semi-simulated data. The results show that, in terms of frequency and amplitude, the intrinsic mode functions (IMFs) decomposed by sEEMD are consistent with the IMFs obtained by EEMD. There is no significant difference between the sEEMD-CCA and EEMD-CCA methods in the ability to separate EMG artifacts from EEG signals (p > 0.05). Even under heavy contamination (signal-to-noise ratio below 2 dB), the relative root-mean-squared error is about 0.3 and the average correlation coefficient remains above 0.9. The running speed of the sEEMD-CCA method for removing EMG artifacts is significantly higher than that of the EEMD-CCA method (p < 0.05): its running time on three lengths of semi-simulated data is shortened by more than 50%. This indicates that sEEMD-CCA is a promising tool for EMG artifact removal in real-time BCI systems.
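The relative root-mean-squared error quoted above is a standard denoising figure of merit; a minimal version is shown below (exact definitions vary slightly across papers, so this is one common convention, not necessarily the authors').

```python
from math import sqrt

def rrmse(clean, denoised):
    """Relative RMSE: RMS of the residual between a clean reference
    segment and its denoised estimate, normalised by the RMS of the
    clean reference."""
    err = sqrt(sum((c - d) ** 2 for c, d in zip(clean, denoised)) / len(clean))
    ref = sqrt(sum(c ** 2 for c in clean) / len(clean))
    return err / ref

clean = [1.0, -1.0, 1.0, -1.0]
noisy = [1.1, -0.9, 1.0, -1.0]
```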

8.
Brain Struct Funct ; 224(8): 2631-2660, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31342157

ABSTRACT

Historically, the primary focus of studies of human white matter tracts has been on large tracts that connect anterior-to-posterior cortical regions. These include the superior longitudinal fasciculus (SLF), the inferior longitudinal fasciculus (ILF), and the inferior fronto-occipital fasciculus (IFOF). Recently, more refined and well-understood tractography methods have facilitated the characterization of several tracts in the posterior of the human brain that connect dorsal-to-ventral cortical regions. These include the vertical occipital fasciculus (VOF), the posterior arcuate fasciculus (pArc), the temporo-parietal connection (TP-SPL), and the middle longitudinal fasciculus (MdLF). Adding these dorso-ventral connective tracts to our standard picture of white matter architecture yields a more complicated pattern of white matter connectivity than previously considered. Dorso-ventral connective tracts may play a role in transferring information from superior horizontal tracts, such as the SLF, to inferior horizontal tracts, such as the IFOF and ILF. We present a full anatomical delineation of these major dorso-ventral connective white matter tracts (the VOF, pArc, TP-SPL, and MdLF). We show their spatial layout and cortical termination mappings in relation to the more established horizontal tracts (SLF, IFOF, ILF, and Arc) and report standard values for quantitative features associated with these tracts. We hope to facilitate further study of these tracts and their relations. To this end, we also share links to automated code that segments these tracts, providing a standard approach to obtaining them for subsequent analysis. We developed open source software to allow reproducible segmentation of the tracts: https://github.com/brainlife/Vertical_Tracts. Finally, we make the segmentation method available as an open cloud service on the data and analysis sharing platform brainlife.io. Investigators can access these services and upload their data to segment these tracts.


Subject(s)
Brain/anatomy & histology, White Matter/anatomy & histology, Brain/diagnostic imaging, Diffusion Magnetic Resonance Imaging, Humans, Image Processing, Computer-Assisted, Male, Neural Pathways/anatomy & histology, Neural Pathways/diagnostic imaging, Software, White Matter/diagnostic imaging
9.
Sci Data ; 6(1): 69, 2019 May 23.
Article in English | MEDLINE | ID: mdl-31123325

ABSTRACT

We describe the Open Diffusion Data Derivatives (O3D) repository: an integrated collection of preserved brain data derivatives and processing pipelines, published together using a single digital object identifier. The data derivatives were generated using modern diffusion-weighted magnetic resonance imaging (dMRI) data with diverse resolution and signal-to-noise properties. In addition to the data, we publish all processing pipelines (also referred to as open cloud services). The pipelines utilize modern methods for neuroimaging data processing (diffusion-signal modelling, fiber tracking, tractography evaluation, white matter segmentation, and structural connectome construction). The O3D open services allow cognitive and clinical neuroscientists to run the connectome mapping algorithms on new, user-uploaded data. Open source code implementing all O3D services is also provided so that computational and computer scientists can reuse and extend the processing methods. Publishing both the data derivatives and the integrated processing pipelines promotes scientific reproducibility and data upcycling by providing multiple scientific communities with open access to the research assets.


Subject(s)
Brain/diagnostic imaging, Connectome, Diffusion Magnetic Resonance Imaging, Algorithms, Humans, Neuroimaging, Software, White Matter/diagnostic imaging
10.
Sci Rep ; 8(1): 11740, 2018 08 06.
Article in English | MEDLINE | ID: mdl-30082818

ABSTRACT

It has been proposed that neuronal populations in the prefrontal cortex (PFC) robustly encode task-relevant information through an interplay with the ventral tegmental area (VTA). Yet, the precise computation underlying such functional interaction remains elusive. Here, we conducted simultaneous recordings of single-unit activity in PFC and VTA of rats performing a GO/NoGO task. We found that mutual information between stimuli and neural activity increases in the PFC as soon as stimuli are presented. Notably, it is the activity of putative dopamine neurons in the VTA that contributes critically to enhance information coding in the PFC. The higher the activity of these VTA neurons, the better the conditioned stimuli are encoded in the PFC.


Subject(s)
Dopaminergic Neurons/cytology, Dopaminergic Neurons/metabolism, Prefrontal Cortex/cytology, Prefrontal Cortex/metabolism, Ventral Tegmental Area/cytology, Ventral Tegmental Area/metabolism, Action Potentials/physiology, Animals, Male, Neural Pathways/physiology, Rats, Rats, Long-Evans
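The mutual information between stimuli and neural activity mentioned above is, in its simplest discrete form, the standard plug-in estimator; the sketch below shows that generic estimator (the paper's exact binning and bias corrections are not reproduced).

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate (in bits) of the mutual information between
    two discrete sequences, e.g. stimulus labels and binned spike
    counts: sum over joint outcomes of p(x,y) * log2(p(x,y)/(p(x)p(y))).
    """
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

stim = [0, 0, 1, 1]
```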
11.
PLoS One ; 12(12): e0188579, 2017.
Article in English | MEDLINE | ID: mdl-29236787

ABSTRACT

The prefrontal cortex (PFC) is a key brain structure for decision making, behavioural flexibility and working memory. Neurons in the PFC encode relevant stimuli through changes in their firing rate, although the metabolic cost of spiking activity places strong constraints on neural codes based on firing rate modulation. Thus, how PFC neural populations code relevant information efficiently is not clearly understood. To address this issue we made single-unit recordings in the PFC of rats performing a GO/NOGO discrimination task and analysed how the entropy between pairs of neurons changes during cue presentation. We found that entropy rises only during reward-predicting cues, and that this change in entropy is accompanied by an increase in the efficiency of the whole process. We studied possible mechanisms behind this efficient gain in entropy by means of a two-neuron leaky integrate-and-fire model, and found that a precise relationship between synaptic efficacy and firing rate is required to explain the experimentally observed results.


Subject(s)
Prefrontal Cortex/physiology, Reward, Action Potentials/physiology, Animals, Male, Rats, Rats, Long-Evans
12.
Sci Rep ; 7(1): 11491, 2017 09 13.
Article in English | MEDLINE | ID: mdl-28904382

ABSTRACT

The ability to map brain networks in living individuals is fundamental to efforts to chart the relation between human behavior, health and disease. Advances in network neuroscience may benefit from new frameworks for mapping brain connectomes. We present a framework to encode structural brain connectomes and diffusion-weighted magnetic resonance imaging (dMRI) data using multidimensional arrays. The framework integrates the relations between connectome nodes, edges, white matter fascicles and diffusion data. We demonstrate the utility of the framework for in vivo white matter mapping and anatomical computing by evaluating 1,490 connectomes, thirteen tractography methods, and three data sets. The framework dramatically reduces storage requirements for connectome evaluation methods, with compression factors of up to 40x. Evaluation of multiple, diverse datasets demonstrates the importance of spatial resolution in dMRI: we measured large increases in connectome resolution as a function of data spatial resolution (up to 52%). Moreover, we demonstrate that the framework allows anatomical manipulations of white matter tracts for statistical inference and for studying the geometrical organization of white matter. Finally, we provide open-source software implementing the method, together with data to reproduce the results.


Subject(s)
Brain/physiology, Connectome, Adult, Algorithms, Brain Mapping, Computational Biology/methods, Databases, Genetic, Diffusion Magnetic Resonance Imaging, Humans, Male, Middle Aged, Neural Pathways, Reproducibility of Results, White Matter/physiology
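The core storage idea (a sparse multiway array that is nonzero only where a fascicle crosses a voxel, which is what makes the 40x compression possible) can be caricatured with a dict-of-keys sparse structure; the framework's actual array type and decomposition details are not reproduced here, and the toy streamlines are invented.

```python
def sparse_tensor_from_streamlines(streamlines):
    """Sketch of the multidimensional-array encoding idea: store the
    connectome as a sparse array indexed by (voxel, fascicle), with a
    nonzero weight only where a fascicle visits a voxel. A plain
    dict-of-keys stands in for a real sparse tensor type.
    """
    phi = {}
    for fid, voxels in enumerate(streamlines):
        for vox in voxels:
            phi[(vox, fid)] = phi.get((vox, fid), 0.0) + 1.0
    return phi

# two toy fascicles listed as sequences of voxel indices
phi = sparse_tensor_from_streamlines([[(0, 0, 0), (0, 0, 1)],
                                      [(0, 0, 1), (0, 1, 1)]])
```

Only 4 of the many possible (voxel, fascicle) pairs are stored, which is the source of the compression.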
13.
PLoS One ; 11(10): e0165288, 2016.
Article in English | MEDLINE | ID: mdl-27780261

ABSTRACT

This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables arise, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. In an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. The experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, showing small variability in its results.


Subject(s)
Algorithms, Entropy, Neural Networks, Computer, Nonlinear Dynamics, Normal Distribution, Signal-To-Noise Ratio
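MaxEnt generalises the classic Gaussianization baseline; that baseline itself is simple enough to sketch: map each observation of the monotonically distorted signal to the standard-normal quantile of its empirical rank. This shows only the idea MaxEnt builds on, not the MaxEnt algorithm.

```python
from statistics import NormalDist

def gaussianize(observed):
    """Rank-based Gaussianization of a 1-D sample: because the
    distortion is monotonic, ranks are preserved, so replacing each
    value by the normal quantile of its rank undoes the distortion up
    to a monotonic reparameterisation.
    """
    n = len(observed)
    order = sorted(range(n), key=lambda i: observed[i])
    z = [0.0] * n
    for rank, i in enumerate(order):
        z[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return z

z = gaussianize([10.0, 1.0, 5.0, 7.0])
```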
14.
PLoS Comput Biol ; 12(2): e1004692, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26845558

ABSTRACT

Tractography uses diffusion MRI to estimate the trajectory and cortical projection zones of white matter fascicles in the living human brain. There are many different tractography algorithms, and each requires the user to set several parameters, such as a curvature threshold. Choosing a single algorithm with specific parameters poses two challenges. First, different algorithms and parameter values produce different results. Second, the optimal choice of algorithm and parameter value may differ between white matter regions, fascicles, subjects, and acquisition parameters. We propose using ensemble methods to reduce algorithm and parameter dependencies. To do so we separate the processes of fascicle generation and evaluation. Specifically, we analyze the value of creating optimized connectomes by systematically combining candidate streamlines from an ensemble of algorithms (deterministic and probabilistic) and systematically varied parameters (curvature and stopping criterion). The ensemble approach leads to optimized connectomes that provide better cross-validated prediction error of the diffusion MRI data than optimized connectomes generated using a single algorithm or parameter set. Furthermore, the ensemble approach produces connectomes that contain both short- and long-range fascicles, whereas single-parameter connectomes are biased towards one or the other. In summary, a systematic ensemble tractography approach can produce connectomes that are superior to standard single-parameter estimates both for predicting the diffusion measurements and for estimating white matter fascicles.


Subject(s)
Brain/physiology, Connectome/methods, Diffusion Tensor Imaging/methods, Magnetic Resonance Imaging/methods, Adult, Computational Biology, Humans, Male, Nerve Net/physiology
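The fascicle-generation half of the ensemble approach amounts to pooling candidate streamlines from many runs; a minimal sketch follows. The subsequent evaluation step, selecting an optimized connectome by cross-validated prediction error of the diffusion data, is not reproduced here.

```python
def ensemble_candidates(runs):
    """Pool candidate streamlines from several tractography runs
    (different algorithms/parameter settings) into one candidate set,
    dropping exact duplicates. Streamlines are sequences of points.
    """
    seen, pooled = set(), []
    for run in runs:
        for sl in run:
            key = tuple(sl)
            if key not in seen:
                seen.add(key)
                pooled.append(sl)
    return pooled

pooled = ensemble_candidates([
    [[(0, 0), (1, 1)], [(0, 0), (2, 2)]],   # e.g. a deterministic run
    [[(0, 0), (1, 1)], [(3, 3), (4, 4)]],   # e.g. a probabilistic run
])
```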
15.
IEEE Trans Pattern Anal Mach Intell ; 35(7): 1660-73, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23681994

ABSTRACT

A new generalized multilinear regression model, termed higher order partial least squares (HOPLS), is introduced with the aim of predicting a tensor (multiway array) Y from a tensor X by projecting the data onto a latent space and performing regression on the corresponding latent variables. HOPLS differs substantially from other regression models in that it explains the data by a sum of orthogonal Tucker tensors, while the number of orthogonal loadings serves as a parameter to control model complexity and prevent overfitting. The low-dimensional latent space is optimized sequentially via a deflation operation, yielding the best joint subspace approximation for both X and Y. Instead of decomposing X and Y individually, a higher order singular value decomposition of a newly defined generalized cross-covariance tensor is employed to optimize the orthogonal loadings. A systematic comparison on both synthetic data and real-world decoding of 3D movement trajectories from electrocorticogram signals demonstrates the advantages of HOPLS over existing methods: better predictive ability, suitability for small sample sizes, and robustness to noise.


Subject(s)
Electroencephalography/methods, Least-Squares Analysis, Signal Processing, Computer-Assisted, Algorithms, Animals, Computer Simulation, Haplorhini, Models, Neurological, Reproducibility of Results
16.
Neural Comput ; 25(1): 186-220, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23020110

ABSTRACT

Recently there has been great interest in sparse representations of signals under the assumption that signals (data sets) can be well approximated by a linear combination of few elements of a known basis (dictionary). Many algorithms have been developed to find such representations for one-dimensional signals (vectors), which requires finding the sparsest solution of an underdetermined linear system of algebraic equations. In this letter, we generalize the theory of sparse representations of vectors to multiway arrays (tensors), that is, signals with a multidimensional structure, by using the Tucker model. The problem is thus reduced to solving a large-scale underdetermined linear system of equations possessing a Kronecker structure, for which we have developed a greedy algorithm, Kronecker-OMP, as a generalization of the classical orthogonal matching pursuit (OMP) algorithm for vectors. We also introduce the concept of multiway block-sparse representation of N-way arrays and develop a new greedy algorithm that exploits not only the Kronecker structure but also block sparsity. This yields a very fast and memory-efficient algorithm called N-BOMP (N-way block OMP). We theoretically demonstrate that, under the block-sparsity assumption, N-BOMP not only has considerably lower complexity but is also more precise than the classic OMP algorithm. Moreover, our algorithms can be used for very large-scale problems that are intractable with standard approaches. We provide several simulations illustrating our results and comparing our algorithms to classical algorithms such as OMP and basis pursuit (BP). We also apply the N-BOMP algorithm as a fast solution for the compressed sensing (CS) problem with large-scale data sets, in particular for 2D compressive imaging (CI) and 3D hyperspectral CI, and show examples with real-world multidimensional signals.


Subject(s)
Algorithms, Computer Simulation, Data Compression/methods, Information Theory, Signal Processing, Computer-Assisted, Linear Models
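The classical OMP algorithm that Kronecker-OMP and N-BOMP generalise is compact enough to sketch directly (this is the textbook vector version, not the tensor extensions the letter introduces).

```python
import numpy as np

def omp(D, y, k):
    """Classical orthogonal matching pursuit: D has unit-norm columns,
    k is the target sparsity. At each step, pick the atom most
    correlated with the residual, re-fit the coefficients on the
    selected support by least squares, and update the residual.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

D = np.eye(6)                       # trivial orthonormal dictionary
y = 3.0 * D[:, 2] + 1.5 * D[:, 5]   # 2-sparse ground truth
x_hat = omp(D, y, 2)
```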
17.
Neural Comput ; 21(12): 3487-518, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19764872

ABSTRACT

In this letter, we propose a new algorithm for estimating sparse nonnegative sources from a set of noisy linear mixtures. In particular, we consider difficult situations with high noise levels and more sources than sensors (the underdetermined case). We show that when sources are very sparse in time and overlapped at some locations, they can be recovered even at very low signal-to-noise ratio and using many fewer sensors than sources. A theoretical analysis based on Bayesian estimation tools is included, showing strong connections with algorithms in related areas of research such as ICA, NMF, FOCUSS, and sparse representation of data with overcomplete dictionaries. Our algorithm uses a Bayesian approach, modeling sparse signals through mixed-state random variables. This new model for priors imposes l0-norm-based sparsity. We start our analysis with the case of nonoverlapped sources (1-sparse), which allows us to simplify the search for the posterior maximum and avoid a combinatorial search. General algorithms for overlapped cases, such as 2-sparse and k-sparse sources, are derived by applying the algorithm for 1-sparse signals recursively. Additionally, a combination of our MAP algorithm with the NN-KSVD algorithm is proposed for estimating the mixing matrix and the sources simultaneously in a truly blind fashion. A complete set of simulation results shows the performance of our algorithm.


Subject(s)
Algorithms, Models, Neurological, Neural Networks, Computer, Signal Processing, Computer-Assisted, Bayes Theorem, Computer Simulation, Humans, Principal Component Analysis, Systems Theory
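The 1-sparse (nonoverlapped) base case is what makes the recursion above tractable: if at most one source is active in a mixture sample, the active index can be found by matching against each mixing-matrix column. The sketch below shows only that noiseless matching idea; the letter's full MAP machinery, noise model and k-sparse recursion are omitted.

```python
import numpy as np

def one_sparse_decode(A, x):
    """For a sample x = A @ s with at most one active source, return
    the active source index (best-matching unit-norm column) and its
    least-squares amplitude. Works even in the underdetermined case
    (more columns/sources than rows/sensors).
    """
    cols = A / np.linalg.norm(A, axis=0)        # unit-norm columns
    j = int(np.argmax(np.abs(cols.T @ x)))      # best-matching column
    amp = (A[:, j] @ x) / (A[:, j] @ A[:, j])   # least-squares amplitude
    return j, amp

# 2 sensors, 3 sources: the underdetermined setting of the abstract
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
j, amp = one_sparse_decode(A, A[:, 1] * 2.0)
```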