Results 1 - 9 of 9
1.
Neuroimage ; 204: 116211, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31546052

ABSTRACT

A common problem in neural recordings is the low signal-to-noise ratio (SNR), particularly when using non-invasive techniques like magneto- or electroencephalography (M/EEG). To address this problem, experimental designs often include repeated trials, which are then averaged to improve the SNR or to infer statistics that can be used in the design of a denoising spatial filter. However, collecting enough repeated trials is often impractical and even impossible in some paradigms, while analyses on existing data sets may be hampered when these do not contain such repeated trials. Therefore, we present a data-driven method that takes advantage of the knowledge of the presented stimulus, to achieve a joint noise reduction and dimensionality reduction without the need for repeated trials. The method first estimates the stimulus-driven neural response using the given stimulus, which is then used to find a set of spatial filters that maximize the SNR based on a generalized eigenvalue decomposition. As the method is fully data-driven, the dimensionality reduction enables researchers to perform their analyses without having to rely on their knowledge of brain regions of interest, which increases accuracy and reduces the human factor in the results. In the context of neural tracking of a speech stimulus using EEG, our method resulted in more accurate short-term temporal response function (TRF) estimates, higher correlations between predicted and actual neural responses, and higher attention decoding accuracies compared to existing TRF-based decoding methods. We also provide an extensive discussion on the central role played by the generalized eigenvalue decomposition in various denoising methods in the literature, and address the conceptual similarities and differences with our proposed method.
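The core of the method is a generalized eigenvalue decomposition (GEVD) between a signal covariance and a total-data covariance. The sketch below is a minimal toy illustration, not the paper's pipeline: the stimulus-driven "signal" covariance is taken from a known clean component rather than estimated from the stimulus, and all sizes are made up.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy data: 8-channel "EEG" carrying one shared source plus channel noise.
n_ch, n_samp = 8, 2000
source = rng.standard_normal(n_samp)              # stimulus-driven component
mixing = rng.standard_normal(n_ch)                # projection to the channels
eeg = np.outer(mixing, source) + 2.0 * rng.standard_normal((n_ch, n_samp))

# In the paper the signal covariance comes from the stimulus-driven response
# estimate; here we use the clean part directly, purely for illustration.
signal = np.outer(mixing, source)
R_s = signal @ signal.T / n_samp                  # signal covariance
R_x = eeg @ eeg.T / n_samp                        # total covariance

# GEVD  R_s w = lambda R_x w : eigenvectors with the largest eigenvalues
# give spatial filters that maximize the output SNR.
eigvals, eigvecs = eigh(R_s, R_x)                 # ascending eigenvalues
w = eigvecs[:, -1]                                # top SNR-maximizing filter

denoised = w @ eeg                                # 1-D denoised component
print(abs(np.corrcoef(denoised, source)[0, 1]) > 0.5)
```

Keeping only the top few generalized eigenvectors performs the joint denoising and dimensionality reduction the abstract describes.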


Subjects
Algorithms , Attention/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Electroencephalography/methods , Electroencephalography/standards , Functional Neuroimaging/methods , Signal Processing, Computer-Assisted , Adolescent , Adult , Artifacts , Female , Functional Neuroimaging/standards , Humans , Male , Reproducibility of Results , Single-Case Studies as Topic , Speech Perception/physiology , Time Factors , Young Adult
2.
PLoS One ; 19(8): e0301406, 2024.
Article in English | MEDLINE | ID: mdl-39121107

ABSTRACT

Recently, the 1/f signal of human electroencephalography (EEG) has attracted attention, as it could potentially reveal a quantitative measure of neural excitation and inhibition in the brain, which may be relevant in a clinical setting. The purpose of this short article is to show that the 1/f signal depends on the vigilance state of the brain in both humans and mice. Proper labelling of the EEG signal is therefore important, as improper labelling may obscure disease-related changes in the 1/f signal. We demonstrate this by comparing EEG results from a longitudinal study in a genetic mouse model of synaptic dysfunction in schizophrenia and autism spectrum disorders to results from a large European cohort study with schizophrenia and mild Alzheimer's disease patients. The comparison shows that, when the 1/f signal is corrected for vigilance state, there is a difference between groups, and that this effect disappears when vigilance state is not corrected for. In conclusion, more attention should be paid to the vigilance state during analysis of EEG signals, regardless of the species.
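The 1/f signal is typically summarized by the exponent of the power spectrum's aperiodic component. A minimal sketch of the simplest estimator, a linear fit in log-log space (tools such as specparam/FOOOF additionally model oscillatory peaks, which this toy example omits):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthesize a power spectrum P(f) ~ f^(-2) over a 1-40 Hz fit range,
# with multiplicative noise standing in for measurement variability.
fs = 250
freqs = np.fft.rfftfreq(fs * 60, d=1 / fs)        # 60 s recording resolution
band = (freqs >= 1) & (freqs <= 40)
f = freqs[band]
true_exponent = 2.0
psd = f ** (-true_exponent) * np.exp(0.1 * rng.standard_normal(f.size))

# Fit log10 P = intercept - exponent * log10 f; the negated slope is the
# 1/f exponent that vigilance state (and potentially disease) modulates.
slope, intercept = np.polyfit(np.log10(f), np.log10(psd), 1)
print(round(-slope, 1))  # → 2.0
```

In practice the fit would be computed per vigilance-state epoch (wake, NREM, REM) before comparing groups, which is the labelling step the abstract argues for.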


Subjects
Brain , Electroencephalography , Animals , Mice , Humans , Male , Brain/physiopathology , Schizophrenia/physiopathology , Female , Alzheimer Disease/physiopathology , Aged , Arousal/physiology , Middle Aged , Autism Spectrum Disorder/physiopathology , Longitudinal Studies
3.
Elife ; 10, 2021 04 30.
Article in English | MEDLINE | ID: mdl-33929315

ABSTRACT

In a multi-speaker scenario, the human auditory system is able to attend to one particular speaker of interest and ignore the others. It has been demonstrated that it is possible to use electroencephalography (EEG) signals to infer to which speaker someone is attending by relating the neural activity to the speech signals. However, classifying auditory attention within a short time interval remains the main challenge. We present a convolutional neural network-based approach to extract the locus of auditory attention (left/right) without knowledge of the speech envelopes. Our results show that it is possible to decode the locus of attention within 1-2 s, with a median accuracy of around 81%. These results are promising for neuro-steered noise suppression in hearing aids, in particular in scenarios where per-speaker envelopes are unavailable.
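As a structural illustration only, the forward pass of a small CNN mapping a short multi-channel EEG window to a left/right probability might look as follows. All layer sizes and weights are illustrative assumptions (and untrained), not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# One EEG window: 64 channels, 256 samples (~2 s at 128 Hz).
n_ch, n_samp = 64, 256
x = rng.standard_normal((n_ch, n_samp))

def conv1d(x, w):
    """Valid 1-D convolution over time; each filter spans all channels."""
    n_out, n_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((n_out, t_out))
    for o in range(n_out):
        for t in range(t_out):
            y[o, t] = np.sum(w[o] * x[:, t:t + k])
    return y

w1 = 0.01 * rng.standard_normal((5, n_ch, 9))   # 5 spatio-temporal filters
h = np.maximum(conv1d(x, w1), 0)                # ReLU
h = h.mean(axis=1)                              # average pool over time
w2 = 0.01 * rng.standard_normal(5)
p_left = 1 / (1 + np.exp(-(h @ w2)))            # sigmoid: P(attending left)
print(0.0 < p_left < 1.0)
```

Because the input is only the EEG window, no per-speaker envelope is needed at decision time, which is the practical advantage the abstract highlights.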


Subjects
Attention , Speech Perception , Acoustic Stimulation , Electroencephalography , Humans , Male , Neural Networks, Computer , Sound , Speech
4.
J Neural Eng ; 17(4): 046039, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32679578

ABSTRACT

OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer which speaker the user intends to listen to. Auditory attention decoding (AAD) algorithms allow this information to be inferred from neural signals, which leads to the concept of neuro-steered hearing aids. We aim to evaluate and demonstrate the feasibility of AAD-supported speech enhancement in challenging noisy conditions based on electroencephalography recordings. APPROACH: The AAD performance with linear versus deep neural network (DNN) based speaker separation was evaluated for same-gender speaker mixtures using three different speaker positions and three different noise conditions. MAIN RESULTS: AAD results based on the linear approach were found to be at least on par with, and sometimes better than, purely DNN-based approaches in terms of AAD accuracy in all tested conditions. However, when using the DNN to support a linear data-driven beamformer, a performance improvement over the purely linear approach was obtained in the most challenging scenarios. The use of multiple microphones was also found to improve speaker separation and AAD performance over single-microphone systems. SIGNIFICANCE: Recent proof-of-concept studies in this context each focus on a different method in a different experimental setting, which makes them hard to compare. Furthermore, they are tested in highly idealized experimental conditions that are still far from a realistic hearing aid setting. This work provides a systematic comparison of linear and non-linear neuro-steered speech enhancement models, as well as a more realistic validation in challenging conditions.


Subjects
Deep Learning , Speech Perception , Acoustic Stimulation , Attention , Electroencephalography , Speech
5.
J Neural Eng ; 15(6): 066017, 2018 12.
Article in English | MEDLINE | ID: mdl-30207293

ABSTRACT

OBJECTIVE: A listener's neural responses can be decoded to identify the speaker the person is attending to in a cocktail party environment. Such auditory attention detection methods have the potential to provide noise suppression algorithms in hearing devices with information about the listener's attention. A challenge is the effect of noise and other acoustic conditions, which can reduce attention detection accuracy. Specifically, noise can impair both the listener's ability to segregate the sound sources and perform selective attention, and the external signal processing needed to decode attention effectively. The aim of this work is to systematically analyze the effect of noise level and speaker position on attention decoding accuracy. APPROACH: 28 subjects participated in the experiment. Auditory stimuli consisted of stories narrated by different speakers from two different locations, along with surrounding multi-talker background babble. EEG signals of the subjects were recorded while they focused on one story and ignored the other. The strength of the babble noise as well as the spatial separation between the two speakers were varied between presentations. Spatio-temporal decoders were trained for each subject and applied to decode the subjects' attention from every 30 s segment of data. Behavioral speech recognition thresholds were obtained for the different speaker separations. MAIN RESULTS: Both the background noise level and the angular separation between speakers affected attention decoding accuracy. Remarkably, attention decoding performance increased with the inclusion of moderate background noise (versus no noise), while across the noisy conditions performance dropped significantly with increasing noise level. We also observed that decoding accuracy improved with increasing speaker separation, exhibiting the advantage of spatial release from masking. Furthermore, the effect of speaker separation on decoding accuracy became stronger as the background noise level increased. A significant correlation between speech intelligibility and attention decoding accuracy was found across conditions. SIGNIFICANCE: This work shows how the background noise level and the relative positions of competing talkers impact attention decoding accuracy. It indicates in which circumstances a neuro-steered noise suppression system may need to operate, as a function of the acoustic conditions, and it delineates the boundary conditions for the operation of EEG-based attention detection systems in neuro-steered hearing prostheses.
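The per-segment decision step used in this line of work is stimulus reconstruction: a spatio-temporal decoder maps lagged EEG to an estimate of the attended envelope, and the segment is assigned to whichever speaker's envelope correlates best. A minimal toy sketch with synthetic data (in practice the decoder is trained on held-out data, not on the test segment as done here for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 64
seg = 30 * fs                                  # one 30 s decision segment
n_ch, n_lags = 16, 16

env_att = np.abs(rng.standard_normal(seg))     # attended envelope (toy)
env_unatt = np.abs(rng.standard_normal(seg))   # unattended envelope (toy)

# Toy EEG: each channel is a lagged copy of the attended envelope plus noise.
eeg = np.stack([np.roll(env_att, lag) for lag in range(n_ch)])
eeg += 0.5 * rng.standard_normal(eeg.shape)

def lagged(eeg, n_lags):
    """Stack time-lagged copies of every channel into a feature matrix."""
    return np.vstack([np.roll(eeg, -l, axis=1) for l in range(n_lags)])

X = lagged(eeg, n_lags)                        # (n_ch * n_lags, seg)
d, *_ = np.linalg.lstsq(X.T, env_att, rcond=None)  # spatio-temporal decoder
recon = d @ X                                  # reconstructed envelope

r_att = np.corrcoef(recon, env_att)[0, 1]
r_unatt = np.corrcoef(recon, env_unatt)[0, 1]
print(r_att > r_unatt)                         # correct attention decision
```

Shorter segments lower both correlations and their gap, which is why segment length trades off against decoding accuracy throughout these studies.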


Subjects
Attention/physiology , Electroencephalography/methods , Acoustic Stimulation , Auditory Perception , Cochlear Implants , Electric Stimulation , Female , Humans , Male , Neural Prostheses , Noise , Psychomotor Performance , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Young Adult
6.
Trends Hear ; 22: 2331216518802702, 2018.
Article in English | MEDLINE | ID: mdl-30289020

ABSTRACT

In clinical practice and research, speech intelligibility is generally measured by instructing the participant to recall sentences. Although this is a reliable and highly repeatable measure, it cannot be used to measure the intelligibility of connected discourse. Therefore, we developed a new method, the self-assessed Békesy procedure, an adaptive procedure that uses intelligibility ratings to converge to a person's speech reception threshold. In this study, we describe the new procedure and its validation in young, normal-hearing listeners. First, we compared the results of the self-assessed Békesy procedure to those of a recall procedure for standardized sentences. Next, we evaluated the inter- and intra-subject variability of our procedure. Furthermore, we compared the thresholds for sentences in three masker types between the self-assessed Békesy procedure and a recall procedure, to verify whether these procedures led to similar conclusions. Finally, we compared the thresholds for two types of sentences and for commercial recordings of stories. In general, the self-assessed Békesy procedure is shown to be a valid and reliable procedure, as similar thresholds (difference < 1 dB) and test-retest reliability (< 1.5 dB) were observed compared with standard speech audiometry tests. In addition, its time efficiency and the fact that it yielded masker differences similar to those of a recall procedure support its potential for implementation in research. Finally, significant differences between the thresholds of sentences and connected discourse materials were found, indicating the importance of controlling for differences in intelligibility when presenting these materials at the same signal-to-noise ratios or when comparing studies.
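The general shape of such an adaptive rating-based track can be sketched as follows. The listener model (a logistic psychometric function), the 2 dB step, the 50% decision rule, and the reversal-averaging rule are all illustrative assumptions, not the published protocol.

```python
import numpy as np

def rate_intelligibility(snr_db, srt_true=-7.0, width=2.0):
    """Toy listener: percent understood as a logistic function of SNR."""
    return 100.0 / (1.0 + np.exp(-(snr_db - srt_true) / width))

snr, step = 0.0, 2.0       # start at 0 dB SNR, move in 2 dB steps
direction = -1             # ratings above 50%: make the task harder
reversals = []

while len(reversals) < 8:
    rating = rate_intelligibility(snr)
    new_dir = -1 if rating > 50 else 1
    if new_dir != direction:           # track changed direction: a reversal
        reversals.append(snr)
        direction = new_dir
    snr += direction * step

# Estimate the speech reception threshold from the reversal points.
srt_estimate = np.mean(reversals)
print(abs(srt_estimate - (-7.0)) <= step)  # → True
```

With connected discourse there is no sentence to recall, so the self-assessed rating replaces the recall scoring while the adaptive tracking logic stays the same.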


Subjects
Audiometry, Pure-Tone/methods , Auditory Threshold/physiology , Hearing/physiology , Self-Assessment , Speech Intelligibility/physiology , Adolescent , Adult , Audiometry, Speech/methods , Female , Healthy Volunteers , Humans , Male , Reference Values , Signal-To-Noise Ratio , Statistics, Nonparametric , Young Adult
7.
IEEE Trans Neural Syst Rehabil Eng ; 25(5): 402-412, 2017 05.
Article in English | MEDLINE | ID: mdl-27244743

ABSTRACT

This paper considers the auditory attention detection (AAD) paradigm, where the goal is to determine which of two simultaneous speakers a person is attending to. The paradigm relies on recordings of the listener's brain activity, e.g., from electroencephalography (EEG). To perform AAD, decoded EEG signals are typically correlated with the temporal envelopes of the speech signals of the separate speakers. In this paper, we study how the inclusion of various degrees of auditory modelling in this speech envelope extraction process affects the AAD performance, where the best performance is found for an auditory-inspired linear filter bank followed by power law compression. These two modelling stages are computationally cheap, which is important for implementation in wearable devices, such as future neuro-steered auditory prostheses. We also introduce a more natural way to combine recordings (over trials and subjects) to train the decoder, which reduces the dependence of the algorithm on regularization parameters. Finally, we investigate the simultaneous design of the EEG decoder and the audio subband envelope recombination weights vector using either a norm-constrained least squares or a canonical correlation analysis, but conclude that this increases computational complexity without improving AAD performance.
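The envelope-extraction stage described above can be sketched as a band-pass filter bank followed by per-band envelope detection and power-law compression. This is a minimal illustration with assumed parameters (8 log-spaced bands, Butterworth filters, exponent 0.6), not the paper's exact auditory model.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(4)

fs = 8000
audio = rng.standard_normal(fs * 2)        # 2 s of toy "speech"

# Auditory-inspired filter bank: 8 logarithmically spaced bands, 100-3000 Hz.
edges = np.logspace(np.log10(100), np.log10(3000), 9)
subenvs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band_sig = sosfiltfilt(sos, audio)     # zero-phase band-pass filtering
    env = np.abs(hilbert(band_sig))        # subband envelope
    subenvs.append(env ** 0.6)             # power-law compression

# Recombine subbands into one broadband envelope (here: simple summation;
# the paper also studies learning these recombination weights).
envelope = np.sum(subenvs, axis=0)
print(envelope.shape == audio.shape and bool(np.all(envelope >= 0)))
```

Both stages are cheap pointwise or FIR/IIR operations, which is the property the abstract emphasizes for wearable implementations.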


Subjects
Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Electroencephalography/methods , Pattern Recognition, Automated/methods , Pattern Recognition, Physiological/physiology , Sound Spectrography/methods , Adolescent , Adult , Algorithms , Biomimetics/methods , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio , Young Adult
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 77-80, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268285

ABSTRACT

State-of-the-art hearing prostheses are equipped with acoustic noise reduction algorithms to improve speech intelligibility. Currently, one of the major challenges is to perform acoustic noise reduction in so-called cocktail party scenarios with multiple speakers, in particular because it is difficult-if not impossible-for the algorithm to determine which are the target speaker(s) that should be enhanced, and which speaker(s) should be treated as interfering sources. Recently, it has been shown that electroencephalography (EEG) can be used to perform auditory attention detection, i.e., to detect to which speaker a subject is attending based on recordings of neural activity. In this paper, we combine such an EEG-based auditory attention detection (AAD) paradigm with an acoustic noise reduction algorithm based on the multi-channel Wiener filter (MWF), leading to a neuro-steered MWF. In particular, we analyze how the AAD accuracy affects the noise suppression performance of an adaptive MWF in a sliding-window implementation, where the user switches his attention between two speakers.
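A minimal sketch of the MWF core, under simplifying assumptions (known noise-only covariance, stationary toy signals): given the noisy-microphone covariance Ryy and the noise covariance Rnn, the filter estimating the speech component in a reference microphone is w = Ryy⁻¹ (Ryy − Rnn) e_ref. In the neuro-steered system, the AAD decision determines which speaker counts as "speech" when these covariances are updated.

```python
import numpy as np

rng = np.random.default_rng(5)

n_mics, n_samp = 4, 5000
steer = rng.standard_normal(n_mics)            # toy acoustic transfer vector
speech = rng.standard_normal(n_samp)
noise = rng.standard_normal((n_mics, n_samp))
y = np.outer(steer, speech) + noise            # microphone signals

Ryy = y @ y.T / n_samp                         # noisy-speech covariance
Rnn = noise @ noise.T / n_samp                 # in practice: noise-only frames
e_ref = np.zeros(n_mics); e_ref[0] = 1.0       # reference microphone selector

w = np.linalg.solve(Ryy, (Ryy - Rnn) @ e_ref)  # multi-channel Wiener filter
out = w @ y                                    # enhanced output signal

def snr(sig, target):
    """Scale-invariant SNR of sig with respect to the target signal."""
    a = (sig @ target) / (target @ target)
    return np.sum((a * target) ** 2) / np.sum((sig - a * target) ** 2)

print(snr(out, speech) > snr(y[0], speech))    # noise suppressed vs. raw mic
```

An adaptive, sliding-window variant would re-estimate Ryy and Rnn over time, so an AAD error steers the filter toward enhancing the wrong speaker, which is the interaction the paper analyzes.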


Subjects
Algorithms , Cochlear Implants , Electroencephalography/methods , Speech Intelligibility , Attention , Humans , Noise , Signal-To-Noise Ratio , Speech Perception
9.
J Neural Eng ; 13(5): 056014, 2016 10.
Article in English | MEDLINE | ID: mdl-27618842

ABSTRACT

OBJECTIVE: We consider the problem of Auditory Attention Detection (AAD), where the goal is to detect which speaker a person is attending to, in a multi-speaker environment, based on neural activity. This work aims to analyze the influence of head-related filtering and ear-specific decoding on the performance of an AAD algorithm. APPROACH: We recorded high-density EEG of 16 normal-hearing subjects as they listened to two speech streams while tasked to attend to the speaker in either their left or right ear. The attended ear was switched between trials. The speech stimuli were administered either dichotically, or after filtering using Head-Related Transfer Functions (HRTFs). A spatio-temporal decoder was trained and used to reconstruct the attended stimulus envelope, and the correlations between the reconstructed and the original stimulus envelopes were used to perform AAD, and arrive at a percentage correct score over all trials. MAIN RESULTS: We found that the HRTF condition resulted in significantly higher AAD performance than the dichotic condition. However, speech intelligibility, measured under the same set of conditions, was lower for the HRTF filtered stimuli. We also found that decoders trained and tested for a specific attended ear performed better, compared to decoders trained and tested for both left and right attended ear simultaneously. In the context of the decoders supporting hearing prostheses, the former approach is less realistic, and studies in which each subject always had to attend to the same ear may find over-optimistic results. SIGNIFICANCE: This work shows the importance of using realistic binaural listening conditions and training on a balanced set of experimental conditions to obtain results that are more representative for the true AAD performance in practical applications.


Subjects
Attention/physiology , Auditory Perception/physiology , Ear , Head , Acoustic Stimulation , Adolescent , Adult , Algorithms , Brain-Computer Interfaces , Cochlear Implants , Speech Discrimination Tests , Electroencephalography , Female , Functional Laterality/physiology , Humans , Male , Neural Prostheses , Speech Intelligibility , Speech Perception/physiology , Young Adult