Results 1 - 6 of 6
1.
J Neural Eng; 15(3): 031005, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29488902

ABSTRACT

OBJECTIVE: Most current electroencephalography (EEG)-based brain-computer interfaces (BCIs) are based on machine learning algorithms. There is a large diversity of classifier types used in this field, as described in our 2007 review paper. Now, approximately ten years after that review was published, many new algorithms have been developed and tested to classify EEG signals in BCIs. The time is therefore ripe for an updated review of EEG classification algorithms for BCIs. APPROACH: We surveyed the BCI and machine learning literature from 2007 to 2017 to identify the new classification approaches that have been investigated to design BCIs. We synthesize these studies in order to present these algorithms, report how they were used for BCIs and what the outcomes were, and identify their pros and cons. MAIN RESULTS: We found that the recently designed classification algorithms for EEG-based BCIs can be divided into four main categories: adaptive classifiers, matrix and tensor classifiers, transfer learning and deep learning, plus a few other miscellaneous classifiers. Among these, adaptive classifiers were demonstrated to be generally superior to static ones, even with unsupervised adaptation. Transfer learning can also prove useful, although its benefits remain unpredictable. Riemannian geometry-based methods have reached state-of-the-art performance on multiple BCI problems and deserve to be explored more thoroughly, along with tensor-based methods. Shrinkage linear discriminant analysis and random forests also appear particularly useful in small training sample settings. On the other hand, deep learning methods have not yet shown convincing improvement over state-of-the-art BCI methods. SIGNIFICANCE: This paper provides a comprehensive overview of the modern classification algorithms used in EEG-based BCIs, presents the principles of these methods, and gives guidelines on when and how to use them. It also identifies a number of challenges to further advance EEG classification in BCI.
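The shrinkage linear discriminant analysis highlighted by the review for small training sample settings can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the paper; the function names and the fixed shrinkage parameter `lam` are invented for the example (practical implementations typically estimate the shrinkage intensity analytically, e.g. via Ledoit-Wolf).

```python
import numpy as np

def shrinkage_lda_fit(X, y, lam=0.1):
    """Binary LDA with a shrinkage-regularized covariance estimate.

    lam blends the pooled empirical covariance toward a scaled identity,
    which stabilizes the estimate when training trials are scarce.
    """
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Xc = np.vstack([X0 - m0, X1 - m1])
    S = Xc.T @ Xc / len(Xc)                      # pooled empirical covariance
    nu = np.trace(S) / S.shape[0]                # shrinkage target: nu * I
    S_shrunk = (1 - lam) * S + lam * nu * np.eye(S.shape[0])
    w = np.linalg.solve(S_shrunk, m1 - m0)       # LDA weight vector
    b = -w @ (m0 + m1) / 2                       # decision threshold at midpoint
    return w, b

def shrinkage_lda_predict(X, w, b):
    """Assign class 1 where the discriminant is positive."""
    return (X @ w + b > 0).astype(int)
```

Without the shrinkage term, the empirical covariance becomes ill-conditioned (or singular) as soon as the feature dimension approaches the number of trials, which is exactly the regime of typical BCI calibration sessions.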


Subjects
Algorithms, Brain-Computer Interfaces/trends, Brain/physiology, Electroencephalography/trends, Signal Processing, Computer-Assisted, Animals, Deep Learning/trends, Electroencephalography/methods, Humans, Time Factors
2.
Comput Math Methods Med; 2014: 317056, 2014.
Article in English | MEDLINE | ID: mdl-24860614

ABSTRACT

This work investigates the use of mixed-norm regularization for sensor selection in event-related potential (ERP) based brain-computer interfaces (BCIs). The classification problem is cast as a discriminative optimization framework in which sensor selection is induced through the use of mixed norms. This framework is extended to the multitask learning setting, where several similar classification tasks related to different subjects are learned simultaneously. In this case, multitask learning helps mitigate the data scarcity issue, yielding more robust classifiers. For this purpose, we introduce a regularizer that induces both sensor selection and classifier similarity. The different regularization approaches are compared on three ERP datasets, demonstrating the benefit of mixed-norm regularization in terms of sensor selection. The multitask approaches are evaluated when only a small number of training examples is available, yielding significant performance improvements, especially for subjects performing poorly.
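The sensor-selection mechanism of a mixed norm can be illustrated with its proximal operator: grouping a weight vector so that each row corresponds to one sensor, the l1/l2 norm zeroes entire rows at once. The sketch below (invented names and parameters, a plain ISTA loop on a least-squares loss rather than the paper's discriminative framework) shows this group-level sparsity in action.

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of the l1/l2 mixed norm.

    W has one row per sensor; rows whose l2 norm falls below tau are
    zeroed entirely, which is how the mixed norm drops whole sensors.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)
    return W * scale

def group_lasso_ista(X, y, n_sensors, tau=0.2, lr=0.1, iters=500):
    """ISTA for a least-squares loss plus the mixed-norm penalty.

    Columns of X are grouped by sensor: shape (n_trials, n_sensors * k),
    where k is the number of features per sensor.
    """
    k = X.shape[1] // n_sensors
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)        # gradient of the smooth loss
        W = (w - lr * grad).reshape(n_sensors, k)
        w = prox_l21(W, lr * tau).ravel()        # sensor-wise soft threshold
    return w.reshape(n_sensors, k)
```

When only one sensor carries signal, the returned weight matrix keeps that sensor's row and drives the others to (near) zero, which is the "sensor selection" behaviour the abstract describes.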


Subjects
Brain Mapping/methods, Brain/physiology, Algorithms, Area Under Curve, Artificial Intelligence, Brain-Computer Interfaces, Computer Simulation, Evoked Potentials, Humans, Models, Statistical, Reproducibility of Results, Software
3.
J Neural Eng; 8(5): 056004, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21817778

ABSTRACT

In many machine learning applications, such as brain-computer interfaces (BCIs), high-dimensional sensor array data are available. Sensor measurements are often highly correlated, and the signal-to-noise ratio is not spread homogeneously across sensors. The collected data are thus highly variable, and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach that integrates sensor weighting into the classification framework. Sensor weights are treated as hyperparameters to be learned by a support vector machine (SVM). The resulting sensor-weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, to control the generalization error. Experimental studies on two data sets are presented: a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM performs on par with the ensemble SVM strategy that won the competition. For the ErrP data set, for which only a small number of trials is available, the sw-SVM shows superior performance compared to three state-of-the-art approaches. These results suggest that the sw-SVM is promising for event-related potential classification, even with a small number of training trials.
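The core idea of sensor weighting can be sketched without reproducing the paper's hyperparameter optimization: scale each sensor's feature block by a candidate weight, train a linear SVM on the scaled data, and score the weights on held-out trials. Everything below is illustrative (invented names, a Pegasos-style SVM solver standing in for the authors' formulation, and weights evaluated by validation accuracy rather than a margin criterion).

```python
import numpy as np

def linear_svm_sgd(X, y, lam=0.01, epochs=50, seed=0):
    """Pegasos-style stochastic subgradient descent for a linear SVM.

    Labels must be +1 / -1; returns the weight vector (no intercept).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1 / (lam * t)
            margin = y[i] * (w @ X[i])
            w = (1 - eta * lam) * w + (eta * y[i] * X[i] if margin < 1 else 0)
    return w

def eval_sensor_weights(alpha, Xtr, ytr, Xva, yva, n_sensors):
    """Score one candidate sensor-weight vector alpha: scale each
    sensor's feature block, train the SVM, return validation accuracy."""
    k = Xtr.shape[1] // n_sensors
    scale = np.repeat(alpha, k)
    w = linear_svm_sgd(Xtr * scale, ytr)
    return ((Xva * scale) @ w * yva > 0).mean()
```

A weight vector that emphasizes an informative sensor and suppresses a noisy one yields higher validation accuracy, which is the signal an outer hyperparameter search over `alpha` would exploit.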


Assuntos
Eletroencefalografia/instrumentação , Eletroencefalografia/métodos , Máquina de Vetores de Suporte , Interface Usuário-Computador , Algoritmos , Encéfalo/fisiologia , Mapeamento Encefálico , Eletroencefalografia/classificação , Processamento Eletrônico de Dados , Potenciais Evocados P300 , Humanos , Modelos Lineares , Processos Mentais , Dinâmica não Linear , Leitura , Reprodutibilidade dos Testes , Razão Sinal-Ruído
4.
Ultrason Imaging; 22(2): 73-94, 2000 Apr.
Article in English | MEDLINE | ID: mdl-11061460

ABSTRACT

Speckle noise is known to be signal-dependent in ultrasound imaging; separating noise from signal is therefore a difficult task. This paper describes a wavelet-based method for reducing speckle noise. From a model of the displayed ultrasound image, we derive the optimal wavelet-domain filter in the least mean-square sense. Simulations on synthetic data were carried out to assess the performance of the proposed filter against the classical wavelet shrinkage scheme, while phantom and tissue images were used to test it on real data. The results show that the filter effectively reduces speckle noise while preserving resolvable details.
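The flavour of a least-mean-square wavelet-domain filter can be conveyed with a one-level Haar decomposition and an empirical Wiener gain on the detail coefficients. This is a generic 1-D sketch under an additive-noise assumption, not the paper's filter (which is derived from a signal-dependent speckle model for 2-D images); all names are invented.

```python
import numpy as np

def haar_fwd(x):
    """One-level orthonormal Haar transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wiener_shrink(x, sigma):
    """Empirical Wiener (least-mean-square) attenuation of the details.

    Each detail coefficient is scaled by an estimate of
    signal_power / (signal_power + noise_power); coefficients dominated
    by noise (d**2 <= sigma**2) are suppressed entirely.
    """
    a, d = haar_fwd(x)
    gain = np.maximum(d**2 - sigma**2, 0) / np.maximum(d**2, 1e-12)
    return haar_inv(a, gain * d)
```

Unlike hard or soft thresholding, the Wiener gain attenuates each coefficient in proportion to its estimated signal content, which is the sense in which such filters are "optimal in the least mean-square sense."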


Assuntos
Processamento de Sinais Assistido por Computador , Ultrassonografia/métodos , Algoritmos , Calibragem , Humanos , Fígado/diagnóstico por imagem , Imagens de Fantasmas , Curva ROC
5.
Med Biol Eng Comput; 37(6): 750-9, 1999 Nov.
Article in English | MEDLINE | ID: mdl-10723883

ABSTRACT

An optimal wavelet filter to improve the signal-to-noise ratio (SNR) of the signal-averaged electrocardiogram is described. As the averaging technique yields the best unbiased estimator, the challenge is to attenuate the noise while preserving the low-amplitude signals that are usually embedded in it. An optimal (in the mean-square sense) wavelet-based filter has been derived from a model of the signal. However, such a filter requires exact knowledge of the noise statistics and the noise-free signal. Hence, to implement it, a method based on successive sub-averaging and wavelet filtering is proposed. Its performance was evaluated using simulated and real ECGs. An SNR improvement of between 6 and 10 dB can be achieved compared with a classical averaging technique using an ensemble of 64 simulated ECG beats. Tests on real ECGs demonstrate the utility of the method: the same noise reduction can be achieved using fewer beats in the filtered ensemble average. Clinical use of this technique would reduce the ensemble size needed for averaging while obtaining the same diagnostic result.
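The sub-averaging trick can be sketched concretely: split the beat ensemble into two halves, take the difference of the two sub-averages (the repeating signal cancels, leaving only noise), use it to estimate the residual noise in the full average, and then wavelet-filter the average. The sketch below is illustrative only (invented names, a one-level Haar transform with a universal soft threshold in place of the paper's optimal filter).

```python
import numpy as np

def denoise_average(beats):
    """Wavelet-filter an ensemble average, estimating the noise level
    from the difference of two sub-averages (the signal cancels there).

    beats: array of shape (n_beats, n_samples), n_samples even.
    """
    n = len(beats) // 2 * 2
    diff = beats[:n // 2].mean(axis=0) - beats[n // 2:n].mean(axis=0)
    # var(diff) = 4 * sigma_beat**2 / n, var(average) = sigma_beat**2 / n,
    # so the noise std of the full n-beat average is diff.std() / 2.
    sigma = diff.std() / 2
    avg = beats[:n].mean(axis=0)
    # one-level orthonormal Haar decomposition of the average
    a = (avg[0::2] + avg[1::2]) / np.sqrt(2)
    d = (avg[0::2] - avg[1::2]) / np.sqrt(2)
    # universal soft threshold on the detail coefficients
    thr = sigma * np.sqrt(2 * np.log(avg.size))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)
    out = np.empty_like(avg)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

The point of the sub-average difference is that it gives a noise estimate without requiring the noise-free signal, which is exactly the missing ingredient the abstract identifies for the optimal filter.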


Assuntos
Eletrocardiografia/métodos , Processamento de Sinais Assistido por Computador , Estudos de Avaliação como Assunto , Humanos
6.
Med Biol Eng Comput; 36(3): 346-50, 1998 May.
Article in English | MEDLINE | ID: mdl-9747575

ABSTRACT

The aim of this study is to investigate the potential of a feedforward neural network for detecting wavelet-preprocessed late potentials. The terminal part of a simulated QRS complex is processed with a continuous wavelet transform, which yields a time-frequency representation of the QRS complex. Diagnostic feature vectors are then obtained by subdividing the representation into several regions and computing the sum of the decomposition coefficients belonging to each region. The neural network is trained with these feature vectors. Simulated ECGs with varying signal-to-noise ratios are used to train and test the classifier. Results show that correct classification ranges from 79% (high-level noise) to 99% (no noise). The study shows the potential of neural networks for classifying late potentials preprocessed by a wavelet transform; however, clinical use of this method still requires further investigation.
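The wavelet-features-into-neural-network pipeline can be sketched end to end in NumPy. This is a loose analogue, not the paper's method: one-level Haar details summed over equal time regions stand in for the continuous wavelet transform's time-frequency region sums, and a small tanh network with one hidden layer stands in for the paper's feedforward classifier. All names and sizes are invented.

```python
import numpy as np

def region_features(x, n_regions=4):
    """Summed detail-coefficient magnitudes over equal time regions --
    a crude stand-in for the paper's time-frequency region sums."""
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # one-level Haar details
    return np.abs(d).reshape(n_regions, -1).sum(axis=1)

def train_mlp(X, y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """One-hidden-layer tanh network trained by full-batch gradient
    descent on the cross-entropy loss (y in {0, 1})."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(H @ W2 + b2)))      # sigmoid output
        g = (p - y) / len(y)                      # dLoss/dlogit
        gH = np.outer(g, W2) * (1 - H**2)         # backprop through tanh
        W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def mlp_predict(X, W1, b1, W2, b2):
    """Class 1 where the output logit is positive."""
    return (np.tanh(X @ W1 + b1) @ W2 + b2 > 0).astype(int)
```

On synthetic signals where one class carries a short high-frequency burst near the end of the window (a rough analogue of a late potential), the region-sum features separate the classes easily and the network learns them well.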


Subjects
Arrhythmias, Cardiac/diagnosis, Electrocardiography, Neural Networks, Computer, Humans