Results 1-7 of 7
1.
J Neural Eng; 21(3), 2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. The DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. The DCNN models successfully addressed the three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also motivating further exploration of alternative DL architectures and their potential constraints.
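The contrast between the two training strategies comes down to how classification windows are assigned to train and test sets. A minimal sketch of both splits, in Python with scikit-learn, is given below; the array shapes, sampling rate, and trial bookkeeping are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: inter-trial vs. intra-trial splitting of EEG classification
# windows. Shapes, sampling rate, and trial bookkeeping are assumptions.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
n_windows, n_channels, n_samples = 3000, 64, 128   # 1 s windows at an assumed 128 Hz
X = rng.standard_normal((n_windows, n_channels, n_samples))
y = rng.integers(0, 2, n_windows)                  # e.g. attended left vs. right
trial_id = np.repeat(np.arange(100), 30)           # 30 windows per trial (assumed)

# Inter-trial: whole trials are held out, so no trial contributes windows
# to both sets -- the stricter, more realistic protocol.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=trial_id))
assert not set(trial_id[train_idx]) & set(trial_id[test_idx])

# Intra-trial: windows are split at random, so train and test windows can
# come from the same trial, which tends to inflate accuracy.
train_idx2, test_idx2 = train_test_split(np.arange(n_windows),
                                         test_size=0.2, random_state=0)
```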


Subjects
Attention, Auditory Perception, Deep Learning, Electroencephalography, Hearing Loss, Humans, Attention/physiology, Female, Electroencephalography/methods, Male, Middle Aged, Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Hearing Loss/diagnosis, Aged, Auditory Perception/physiology, Noise, Adult, Hearing Aids, Speech Perception/physiology, Neural Networks, Computer
2.
Article in English | MEDLINE | ID: mdl-38083171

ABSTRACT

Attending to the speech stream of interest in multi-talker environments can be a challenging task, particularly for listeners with hearing impairment. Research suggests that neural responses assessed with electroencephalography (EEG) are modulated by the listener's auditory attention, revealing selective neural tracking (NT) of the attended speech. NT methods mostly rely on hand-engineered acoustic and linguistic speech features to predict the neural response. Only recently have deep neural network (DNN) models without specific linguistic information been used to extract speech features for NT, demonstrating that speech features in hierarchical DNN layers can predict neural responses throughout the auditory pathway. In this study, we go one step further and investigate the suitability of similar DNN models for speech in predicting neural responses to competing speech observed in EEG. We recorded EEG data using a 64-channel acquisition system from 17 listeners with normal hearing who were instructed to attend to one of two competing talkers. Our data revealed that EEG responses are significantly better predicted by DNN-extracted speech features than by hand-engineered acoustic features. Furthermore, analysis of the hierarchical DNN layers showed that early layers yielded the highest predictions. Moreover, we found a significant increase in auditory attention classification accuracy when using DNN-extracted speech features instead of hand-engineered acoustic features. These findings open a new avenue for the development of NT measures to evaluate and further advance hearing technology.
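To make the comparison concrete, here is a minimal sketch, in Python, of the forward-modelling approach the abstract describes: regularized linear regression from speech features to an EEG channel, scored by the correlation between predicted and recorded responses. The feature dimensions and the use of plain ridge regression are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: forward model from speech features to EEG, scored by correlation.
# Feature dimensions and plain ridge regression are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def neural_tracking_score(features, eeg_channel, n_train):
    """Fit features -> EEG on the first n_train samples; return test correlation."""
    model = Ridge(alpha=1.0)
    model.fit(features[:n_train], eeg_channel[:n_train])
    pred = model.predict(features[n_train:])
    return pearsonr(pred, eeg_channel[n_train:])[0]

rng = np.random.default_rng(1)
T = 6000                                    # time samples
eeg = rng.standard_normal(T)                # one EEG channel
dnn_feats = rng.standard_normal((T, 512))   # early DNN-layer activations (assumed dim.)
env_feats = rng.standard_normal((T, 1))     # hand-engineered envelope feature

r_dnn = neural_tracking_score(dnn_feats, eeg, n_train=4000)
r_env = neural_tracking_score(env_feats, eeg, n_train=4000)
print(f"DNN features r = {r_dnn:.3f}, envelope r = {r_env:.3f}")
```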


Subjects
Hearing Loss, Speech Perception, Humans, Speech/physiology, Speech Perception/physiology, Electroencephalography/methods, Acoustics
3.
J Neural Eng; 20(6), 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-37988748

ABSTRACT

Objective. This paper presents a novel domain adaptation (DA) framework to enhance the accuracy of electroencephalography (EEG)-based auditory attention classification, specifically for classifying the direction (left or right) of attended speech. The framework aims to improve performance for subjects with initially low classification accuracy, overcoming challenges posed by instrumental and human factors. Limited dataset size, variations in EEG data quality due to factors such as noise, electrode misplacement, or subject differences, and the need for generalization across different trials, conditions, and subjects necessitate the use of DA methods. By leveraging DA methods, the framework can learn from one EEG dataset and adapt to another, potentially resulting in more reliable and robust classification models. Approach. This paper investigates a DA method based on parallel transport for addressing the auditory attention classification problem. The EEG data utilized in this study originate from an experiment in which subjects were instructed to selectively attend to one of two spatially separated voices presented simultaneously. Main results. A significant improvement in classification accuracy was observed when poor data from one subject were transported to the domain of good data from different subjects, as compared to the baseline. The mean classification accuracy for subjects with poor data increased from 45.84% to 67.92%. Specifically, the highest classification accuracy achieved for a single subject reached 83.33%, a substantial increase from the baseline accuracy of 43.33%. Significance. The findings of our study demonstrate the improved classification performance achieved through the implementation of DA methods, bringing us a step closer to leveraging EEG in neuro-steered hearing devices.
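The core of parallel-transport style domain adaptation on EEG is often expressed on trial covariance matrices: each domain's covariances are transported so that their mean becomes the identity, aligning domains before classification. The sketch below shows this re-centering in Python; using the arithmetic mean in place of the Riemannian mean, and the shapes involved, are simplifying assumptions rather than the authors' exact method.

```python
# Sketch: re-centering trial covariance matrices so a domain's mean
# covariance becomes the identity. Arithmetic mean stands in for the
# Riemannian mean -- a simplifying assumption.
import numpy as np

def inv_sqrtm(m):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs / np.sqrt(vals)) @ vecs.T

def transport_to_identity(covs, mean_cov):
    """Map each covariance C to M^{-1/2} C M^{-1/2}."""
    w = inv_sqrtm(mean_cov)
    return np.array([w @ c @ w for c in covs])

rng = np.random.default_rng(2)
n_trials, n_ch, n_samp = 40, 8, 256
X = rng.standard_normal((n_trials, n_ch, n_samp))
covs = np.einsum('tcs,tds->tcd', X, X) / n_samp   # per-trial covariance matrices

mean_cov = covs.mean(axis=0)                      # stand-in for the Riemannian mean
covs_aligned = transport_to_identity(covs, mean_cov)
# After alignment, the domain's mean covariance is the identity matrix.
```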


Subjects
Electroencephalography, Speech Perception, Humans, Acoustic Stimulation/methods, Electroencephalography/methods, Noise, Attention
4.
J Speech Lang Hear Res; 66(11): 4575-4589, 2023 Nov 9.
Article in English | MEDLINE | ID: mdl-37850878

ABSTRACT

PURPOSE: There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential method for this that analyzes gaze, and we use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation. METHOD: Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested whether these events predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition. RESULTS: We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence transition was predicted by time: the odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners. CONCLUSIONS: MLR modeling of eye gaze during talker transitions is a promising approach to studying a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method.
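A multilevel (mixed-effects) logistic regression of gaze over time can be sketched as follows in Python with statsmodels, with random intercepts for conversation events and listeners mirroring the random-effects comparison described above. The column names, simulated data, and variance-component structure are illustrative assumptions.

```python
# Sketch: multilevel logistic regression of gaze on time around a talker
# transition, with random intercepts for events and listeners. Column
# names and simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "time": rng.uniform(-0.4, 1.0, n),        # seconds re: new talker's speech onset
    "event_id": rng.integers(0, 60, n),       # hypothetical transition-event IDs
    "listener_id": rng.integers(0, 22, n),    # hypothetical listener IDs
})
# 1 = gaze is on the new talker; simulated so the odds rise with time.
p = 1.0 / (1.0 + np.exp(-(3.0 * df["time"] - 0.5)))
df["on_new_talker"] = rng.binomial(1, p)

# Fixed effect of time; variance components (random intercepts) for
# individual events and individual listeners.
model = BinomialBayesMixedGLM.from_formula(
    "on_new_talker ~ time",
    vc_formulas={"event": "0 + C(event_id)", "listener": "0 + C(listener_id)"},
    data=df,
)
result = model.fit_vb()                       # variational Bayes fit
print(result.summary())
```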


Subjects
Deafness, Hearing Loss, Speech Perception, Humans, Aged, Acoustic Stimulation/methods, Speech
5.
Front Neurosci; 16: 873201, 2022.
Article in English | MEDLINE | ID: mdl-35844213

ABSTRACT

This study details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with ear molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people in a dialogue solving a Diapix task. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three steering conditions (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye-movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one channel pair selected from 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG estimates of the attended speaker were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50% to 89%. Based on offline simulation, it was established that higher metric scores obtained for a channel during calibration were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition; comprehension in the two steering conditions was not significantly different. Further, better comprehension in the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results for the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices.
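The channel-selection step can be pictured as scoring every bipolar electrode pair against a reference gaze signal and keeping the best one. The sketch below uses plain correlation as the scoring metric; the study's three calibration metrics are not reproduced here, so treat this as an assumed simplification.

```python
# Sketch: pick the in-ear EOG channel pair that best tracks a reference
# gaze signal. Plain correlation stands in for the study's three
# calibration metrics -- an assumed simplification.
import numpy as np
from itertools import combinations

def best_channel_pair(eog, gaze_ref):
    """eog: (n_electrodes, n_samples); gaze_ref: (n_samples,)."""
    best_pair, best_score = None, -np.inf
    for i, j in combinations(range(eog.shape[0]), 2):
        bipolar = eog[i] - eog[j]                        # bipolar derivation
        score = abs(np.corrcoef(bipolar, gaze_ref)[0, 1])
        if score > best_score:
            best_pair, best_score = (i, j), score
    return best_pair, best_score

rng = np.random.default_rng(4)
eog = rng.standard_normal((36, 5000))    # 36 electrodes, as in the study
gaze = rng.standard_normal(5000)         # eye-tracker reference during calibration
pair, score = best_channel_pair(eog, gaze)
print(f"selected pair {pair} with |r| = {score:.3f}")
```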

6.
Front Digit Health; 3: 724714, 2021.
Article in English | MEDLINE | ID: mdl-34713193

ABSTRACT

Introduction: By adding more sensor technology, modern hearing aids (HAs) strive to become better, more personalized, self-adaptive devices that can handle environmental changes and cope with the day-to-day fitness of their users. The latest HA technology available on the market already combines sound analysis with motion-activity classification based on accelerometers to adjust settings. While there is much research on activity tracking using accelerometers in sports applications and consumer electronics, there is not yet much in hearing research. Objective: This study investigates the feasibility of activity tracking with ear-level accelerometers and how it compares to waist-mounted accelerometers, a more common measurement location. Method: The activity classification methods in this study are based on supervised learning. The experimental setup consisted of 21 subjects, equipped with two XSens MTw Awinda sensors at ear level and one at waist level, performing nine different activities. Results: The highest accuracy on our experimental data was obtained with the combination of bagging and classification-tree techniques. The total accuracy over all activities and users was 84% (ear level), 90% (waist level), and 91% (ear level + waist level). Most prominently, the classes standing, jogging, lying (on one side), lying (face down), and walking all reached accuracies above 90%. Furthermore, estimated ear-level step-detection accuracy was 95% in walking and 90% in jogging. Conclusion: It is demonstrated that several activities can be classified using ear-level accelerometers with an accuracy on par with waist level, and that step-detection accuracy is comparable to that of a high-performance wrist device. These findings are encouraging for the development of activity applications in hearing healthcare.
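A minimal sketch of the reported classifier family, bagged classification trees on windowed accelerometer features, follows in Python with scikit-learn. The per-window features (mean and variance per axis) and the window length are assumptions made for illustration.

```python
# Sketch: bagged classification trees on windowed accelerometer features.
# Window length and per-axis mean/variance features are assumptions.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n_windows = 1800
raw = rng.standard_normal((n_windows, 3, 100))      # 3-axis windows, 100 samples each
X = np.hstack([raw.mean(axis=2), raw.var(axis=2)])  # simple per-axis features
y = rng.integers(0, 9, n_windows)                   # nine activity classes

clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```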

7.
Front Neurosci; 13: 1294, 2019.
Article in English | MEDLINE | ID: mdl-31920477

ABSTRACT

People with hearing impairment typically have difficulty following conversations in multi-talker situations. Previous studies have shown that utilizing eye gaze to steer audio through beamformers could be a solution in such situations. Recent studies have shown that in-ear electrodes capturing electrooculography in the ear (EarEOG) can estimate eye gaze relative to the head when the head is fixed. Head movement can be estimated using motion sensors around the ear, yielding an estimate of absolute eye gaze in the room. In this study, an experiment was designed to mimic a multi-talker situation in order to study and model the EarEOG signal while participants attempted to follow a conversation. Eleven hearing-impaired participants were presented with speech from the DAT speech corpus (Bo Nielsen et al., 2014), with three targets positioned at -30°, 0°, and +30° azimuth. The experiment was run in two setups: one where participants had their head fixed in a chinrest, and the other where they were free to move their head. The participants' task was to focus their visual attention on an LED-indicated target that changed regularly. A model was developed for relative eye-gaze estimation, taking saccades, fixations, head movement, and drift from the electrode-skin half-cell potential into account. This model explained 90.5% of the variance of the EarEOG when the head was fixed, and 82.6% when the head was free. Absolute eye gaze was also estimated using that model. When the head was fixed, the estimate of absolute eye gaze was reliable. However, due to hardware issues, the estimate of absolute eye gaze when the head was free had too large a variance to reliably identify the attended target. Overall, this study demonstrates the potential of estimating absolute eye gaze using EarEOG and motion sensors around the ear.
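The modelling idea, explaining the EarEOG signal from eye-gaze angle plus slow electrode drift and reporting variance explained, can be sketched with an ordinary least-squares fit. The linear drift term and the simulated signals below are simplifying assumptions; the paper's model additionally handles saccades, fixations, and head movement explicitly.

```python
# Sketch: least-squares model of EarEOG as gain * gaze + linear drift.
# The simulated signals, gain, and drift rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 60.0, 6000)                    # 60 s at an assumed 100 Hz
gaze_deg = 30.0 * np.sign(np.sin(0.2 * np.pi * t))  # jumps between +/-30 deg targets
ear_eog = 2.0 * gaze_deg + 5.0 * (t / 60.0) + rng.standard_normal(t.size)

# Fit EarEOG = gain * gaze + drift_rate * t + offset by ordinary least squares.
A = np.column_stack([gaze_deg, t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, ear_eog, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((ear_eog - pred) ** 2) / np.sum((ear_eog - ear_eog.mean()) ** 2)
print(f"estimated gain = {coef[0]:.2f}, variance explained R^2 = {r2:.3f}")
```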
