Results 1 - 20 of 216
1.
Adv Sci (Weinh) ; : e2401379, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39248654

ABSTRACT

Focusing on a specific conversation amidst multiple interfering talkers is challenging, especially for those with hearing loss. Brain-controlled assistive hearing devices aim to alleviate this problem by enhancing the attended speech based on the listener's neural signals using auditory attention decoding (AAD). Departing from conventional AAD studies that relied on oversimplified scenarios with stationary talkers, a realistic AAD task that involves multiple talkers taking turns as they continuously move in space in background noise is presented. Invasive electroencephalography (iEEG) data are collected from three neurosurgical patients as they focused on one of the two moving conversations. An enhanced brain-controlled assistive hearing system that combines AAD and a binaural speaker-independent speech separation model is presented. The separation model unmixes talkers while preserving their spatial location and provides talker trajectories to the neural decoder to improve AAD accuracy. Subjective and objective evaluations show that the proposed system enhances speech intelligibility and facilitates conversation tracking while maintaining spatial cues and voice quality in challenging acoustic environments. This research demonstrates the potential of this approach in real-world scenarios and marks a significant step toward developing assistive hearing technologies that adapt to the intricate dynamics of everyday auditory experiences.
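Stimulus reconstruction is a common backbone for AAD systems of the kind described here: a linear decoder maps EEG back to a speech envelope, and attention is assigned to the talker whose envelope correlates best with the reconstruction. A minimal sketch on synthetic data (all signals, dimensions, and noise levels are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samp, n_chan = 4000, 16

# Synthetic speech envelopes for two competing talkers
env_a = rng.standard_normal(n_samp)
env_b = rng.standard_normal(n_samp)

# Synthetic EEG that weakly tracks the attended talker (A) plus noise
mixing = rng.standard_normal(n_chan)
eeg = np.outer(env_a, mixing) + 2.0 * rng.standard_normal((n_samp, n_chan))

# Train a least-squares stimulus-reconstruction decoder on the first half
half = n_samp // 2
w, *_ = np.linalg.lstsq(eeg[:half], env_a[:half], rcond=None)

# Decode attention on the held-out half via envelope correlation
recon = eeg[half:] @ w
r_a = np.corrcoef(recon, env_a[half:])[0, 1]
r_b = np.corrcoef(recon, env_b[half:])[0, 1]
attended = "A" if r_a > r_b else "B"
```

Real systems add time lags to the decoder and, as in this paper, feed in talker trajectories from a separation front end; the correlation-contrast decision rule is the same.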

2.
J Clin Med ; 13(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39200929

ABSTRACT

Background/Objectives: Autism spectrum disorder (ASD) is a lifelong neurodevelopmental condition characterised by impairments in social communication, sensory abnormalities, and attentional deficits. Children with ASD often face significant challenges with speech perception and auditory attention, particularly in noisy environments. This study aimed to assess the effectiveness of noise-cancelling Bluetooth earbuds (Nuheara IQbuds Boost) in improving speech perception and auditory attention in children with ASD. Methods: Thirteen children aged 6-13 years diagnosed with ASD participated. Pure tone audiometry confirmed normal hearing levels. Speech perception in noise was measured using the Consonant-Nucleus-Consonant-Word test, and auditory/visual attention was evaluated via the Integrated Visual and Auditory Continuous Performance Task. Participants completed these assessments both with and without the IQbuds in situ. A two-week device trial evaluated classroom listening and communication improvements using the Listening Inventory for Education-Revised (teacher version) questionnaire. Results: Speech perception in noise was significantly poorer for the ASD group compared to typically developing peers and did not change with the IQbuds. Auditory attention, however, significantly improved when the children were using the earbuds. Additionally, classroom listening and communication improved significantly after the two-week device trial. Conclusions: While the noise-cancelling earbuds did not enhance speech perception in noise for children with ASD, they significantly improved auditory attention and classroom listening behaviours. These findings suggest that Bluetooth earbuds could be a viable alternative to remote microphone systems for enhancing auditory attention in children with ASD, offering benefits in classroom settings and potentially minimising the stigma associated with traditional assistive listening devices.

3.
Neural Netw ; 179: 106580, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39096751

ABSTRACT

Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment. Although EEG-based AAD methods have shown promising results in recent years, current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images, which makes it challenging to handle EEG signals with their non-Euclidean characteristics. To address this problem, this paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input. Specifically, to represent the non-Euclidean properties of EEG signals effectively, dynamical graph convolutional networks are applied to capture the graph structure of EEG signals and to extract features related to auditory spatial attention. In addition, to further improve AAD performance, self-distillation, consisting of feature-distillation and hierarchical-distillation strategies at each layer, is integrated. These strategies leverage features and classification results from the deepest network layers to guide the learning of shallow layers. Our experiments are conducted on two publicly available datasets, KUL and DTU. Under a 1-second time window, we achieve accuracies of 90.0% and 79.6% on KUL and DTU, respectively. We compare our DGSD method with competitive baselines, and the experimental results indicate that its detection performance is not only superior to the best reproducible baseline but also requires approximately 100 times fewer trainable parameters.
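Graph convolution treats the EEG montage as a graph whose nodes are channels. Its core operation, normalised neighbourhood aggregation followed by a shared linear map, can be sketched as below. The layer sizes and the random channel graph are illustrative only; the paper's DGSD model learns the graph dynamically rather than fixing it.

```python
import numpy as np

def graph_conv(x, adj, w):
    """One graph-convolution layer: add self-loops, symmetrically
    normalise the adjacency, aggregate neighbouring channels, then
    apply a shared linear transform and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # self-loops
    d_inv = 1.0 / np.sqrt(a_hat.sum(axis=1))      # degree normalisation
    a_norm = a_hat * d_inv[:, None] * d_inv[None, :]
    return np.maximum(a_norm @ x @ w, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 5))                  # 64 channels x 5 features
adj = (rng.random((64, 64)) > 0.9).astype(float)  # sparse channel graph
adj = np.maximum(adj, adj.T)                      # make it undirected
out = graph_conv(x, adj, rng.standard_normal((5, 8)))
```

Each output row is a channel's new feature vector, computed from the channel itself and its graph neighbours, which is what lets such layers exploit spatial relationships that plain image-style convolutions miss.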


Subject(s)
Attention, Electroencephalography, Neural Networks (Computer), Electroencephalography/methods, Humans, Attention/physiology, Auditory Perception/physiology, Brain/physiology, Acoustic Stimulation/methods, Algorithms
4.
Neurosci Biobehav Rev ; 164: 105814, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39032842

ABSTRACT

Visuomanual prism adaptation (PA), which consists of pointing to visual targets while wearing prisms that shift the visual field, is one of the oldest experimental paradigms used to investigate sensorimotor plasticity. Since the 2000s, growing scientific interest has emerged in extending PA to cognitive functions across several sensory modalities. The present work focuses on the aftereffects of PA within the auditory modality. Recent studies showed changes in the mental representation of auditory frequencies and a shift of divided auditory attention following PA. Moreover, one study demonstrated benefits of PA in a patient suffering from tinnitus. In light of these results, we address the following question: how can audition be modulated by inducing sensorimotor plasticity with glasses? Based on the literature, we suggest a bottom-up attentional mechanism involving cerebellar, parietal, and temporal structures to explain the crossmodal aftereffects of PA. This review opens promising new avenues of research on the aftereffects of PA in audition and their implications for the treatment of auditory disorders.


Subject(s)
Physiological Adaptation, Auditory Perception, Humans, Auditory Perception/physiology, Physiological Adaptation/physiology, Visual Perception/physiology, Attention/physiology, Figural Aftereffect/physiology
5.
J Neural Eng ; 21(4)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38936398

ABSTRACT

Objective. Measures of functional connectivity (FC) can elucidate which cortical regions work together in order to complete a variety of behavioral tasks. This study's primary objective was to expand a previously published model of measuring FC to include multiple subjects and several regions of interest. While FC has been more extensively investigated in vision and other sensorimotor tasks, it is not as well understood in audition. The secondary objective of this study was to investigate how auditory regions are functionally connected to other cortical regions when attention is directed to different distinct auditory stimuli. Approach. This study implements a linear dynamic system (LDS) to measure the structured time-lagged dependence across several cortical regions in order to estimate their FC during a dual-stream auditory attention task. Results. The model's output shows consistent functionally connected regions across different listening conditions, indicative of an auditory attention network that engages regardless of endogenous switching of attention or different auditory cues being attended. Significance. The LDS implemented in this study implements a multivariate autoregression to infer FC across cortical regions during an auditory attention task. This study shows how a first-order autoregressive function can reliably measure functional connectivity from M/EEG data. Additionally, the study shows how auditory regions engage with the supramodal attention network outlined in the visual attention literature.
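A first-order multivariate autoregression of the kind used here models each region's activity as a linear function of all regions at the previous time step; the off-diagonal coefficients are the time-lagged dependencies read as FC. A toy sketch with two simulated "regions" (the coupling matrix and noise level are arbitrary, chosen only to show that least squares recovers the lagged structure):

```python
import numpy as np

def fit_var1(x):
    """Least-squares fit of x[t] = A @ x[t-1] + noise.
    Entry A[i, j] measures the lagged influence of region j on region i."""
    a, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
    return a.T

rng = np.random.default_rng(1)
true_a = np.array([[0.5, 0.3],   # region 1 is driven by region 2...
                   [0.0, 0.5]])  # ...but not the other way around
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = true_a @ x[t - 1] + 0.1 * rng.standard_normal(2)

a_hat = fit_var1(x)   # recovers an asymmetric, directed coupling pattern
```

The asymmetry of the estimated matrix is what distinguishes this lagged, directed notion of FC from simple zero-lag correlation between regions.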


Subject(s)
Attention, Electroencephalography, Humans, Electroencephalography/methods, Male, Female, Attention/physiology, Adult, Acoustic Stimulation/methods, Young Adult, Linear Models, Auditory Perception/physiology, Auditory Cortex/physiology, Magnetoencephalography/methods, Nerve Net/physiology
6.
J Neurosci ; 44(30)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38886058

ABSTRACT

Completely ignoring a salient distractor presented concurrently with a target is difficult, and sometimes attention is involuntarily attracted to the distractor's location (attentional capture). Employing the N2ac component as a marker of attention allocation toward sounds, in this study we investigate the spatiotemporal dynamics of auditory attention across two experiments. Human participants (male and female) performed an auditory search task, where the target was accompanied by a distractor in two-thirds of the trials. For a distractor more salient than the target (Experiment 1), we observe not only a distractor N2ac (indicating attentional capture) but the full chain of attentional dynamics implied by the notion of attentional capture, namely, (1) the distractor captures attention before the target is attended, (2) allocation of attention to the target is delayed by distractor presence, and (3) the target is attended after the distractor. Conversely, for a distractor less salient than the target (Experiment 2), although responses were delayed, no attentional capture was observed. Together, these findings reveal two types of spatial attentional dynamics in the auditory modality (distraction with and without attentional capture).


Subject(s)
Acoustic Stimulation, Attention, Auditory Perception, Spatial Perception, Humans, Female, Male, Attention/physiology, Adult, Young Adult, Acoustic Stimulation/methods, Auditory Perception/physiology, Spatial Perception/physiology, Reaction Time/physiology, Electroencephalography
7.
J Neurophysiol ; 132(2): 514-526, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38896795

ABSTRACT

The vestigial pinna-orienting system in humans is capable of increasing the activity of several auricular muscles in response to lateralized transient auditory stimuli. For example, transient increases in electromyographic activity in the posterior auricular muscle (PAM) to an attention-capturing stimulus have been documented. For the current study, surface electromyograms (EMGs) were recorded from the PAMs and superior auricular muscles (SAMs) of 10 normal-hearing participants. During the experiments, lateralized transient auditory stimuli, such as a crying baby, a shattering vase, or the participants' first names, were presented. These transient stimuli were presented either in silence or while participants actively listened to a podcast. Although ipsilateral PAM activity increased in response to transient stimuli, the SAM displayed the opposite behavior, i.e., a brief, ipsilateral suppression of activity. This suppression of ipsilateral SAM activity was more frequent on the right (75%) than the left side (35%), whereas an ipsilateral PAM increase was roughly equal in prevalence on the two sides (left: 90%, right: 95%). During the active listening task, SAM suppression in the right ear was significantly larger in response to ipsilateral stimuli compared with contralateral ones (P = 0.002), whereas PAM activity increased significantly (P = 0.002). Overall, this study provides evidence of a systematic transient suppression of the SAM during exogenous attention. This could suggest a more complex system than previously assumed, as the presence of synchronized excitatory and inhibitory components in different auricular muscles points toward a coordinated attempt at reflexively orienting the pinna toward a sound. NEW & NOTEWORTHY: This study provides evidence that two auricular muscles in humans, the posterior and superior auricular muscles (PAM, SAM), react fundamentally differently to lateralized transient auditory stimuli, especially during active listening. Although the PAM reacts with a transient increase in ipsilateral activity, ongoing ipsilateral SAM activity is briefly suppressed at the same time. This indicates the presence of a more complex and nuanced pinna-orienting system in humans, with synchronized excitatory and inhibitory components, than previously suspected.


Subject(s)
Electromyography, Humans, Male, Female, Adult, Skeletal Muscle/physiology, Young Adult, Acoustic Stimulation, Ear Auricle/physiology, Reflex/physiology
8.
J Neural Eng ; 21(3)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38834062

ABSTRACT

Objective. In this study, we use electroencephalography (EEG) recordings to determine whether a subject is actively listening to a presented speech stimulus. More precisely, we aim to discriminate between an active listening condition, and a distractor condition where subjects focus on an unrelated distractor task while being exposed to a speech stimulus. We refer to this task as absolute auditory attention decoding. Approach. We re-use an existing EEG dataset where the subjects watch a silent movie as a distractor condition, and introduce a new dataset with two distractor conditions (silently reading a text and performing arithmetic exercises). We focus on two EEG features, namely neural envelope tracking (NET) and spectral entropy (SE). Additionally, we investigate whether the detection of such an active listening condition can be combined with a selective auditory attention decoding (sAAD) task, where the goal is to decide to which of multiple competing speakers the subject is attending. The latter is a key task in so-called neuro-steered hearing devices that aim to suppress unattended audio, while preserving the attended speaker. Main results. Contrary to a previous hypothesis of higher SE being related with actively listening rather than passively listening (without any distractors), we find significantly lower SE in the active listening condition compared to the distractor conditions. Nevertheless, the NET is consistently significantly higher when actively listening. Similarly, we show that the accuracy of a sAAD task improves when evaluating the accuracy only on the highest NET segments. However, the reverse is observed when evaluating the accuracy only on the lowest SE segments. Significance. We conclude that the NET is more reliable for decoding absolute auditory attention as it is consistently higher when actively listening, whereas the relation of the SE between actively and passively listening seems to depend on the nature of the distractor.
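Spectral entropy, one of the two features compared above, is the Shannon entropy of the normalised power spectrum: low when power concentrates in a few frequencies, approaching 1 for broadband, noise-like activity. A sketch on illustrative test signals (a pure tone and white noise, not EEG):

```python
import numpy as np

def spectral_entropy(sig):
    """Normalised Shannon entropy of the power spectral density,
    scaled to [0, 1] by the entropy of a flat spectrum."""
    psd = np.abs(np.fft.rfft(sig)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                     # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))

fs = 128
t = np.arange(4 * fs) / fs
tone = np.sin(2 * np.pi * 10 * t)                          # narrowband
noise = np.random.default_rng(0).standard_normal(len(t))   # broadband
# spectral_entropy(tone) is near 0; spectral_entropy(noise) is near 1
```

In practice the feature is computed per EEG channel on short windows and often restricted to a band of interest before the entropy is taken.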


Subject(s)
Attention, Electroencephalography, Speech Perception, Humans, Attention/physiology, Electroencephalography/methods, Female, Male, Speech Perception/physiology, Adult, Young Adult, Acoustic Stimulation/methods, Auditory Perception/physiology
9.
Indian J Otolaryngol Head Neck Surg ; 76(3): 2250-2256, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38883545

ABSTRACT

Attention is a fundamental aspect of human cognitive function and is crucial for essential activities such as learning, social interaction, and routine tasks. Notably, auditory attention involves complex interactions and collaboration among multiple brain networks. Recognizing impairments of auditory attention, understanding their underlying mechanisms, and identifying the brain regions involved are essential for developing treatments and interventions for individuals with auditory attention deficits, which underscores the importance of investigating these matters. In the current study, we reviewed the full text of 53 articles on auditory attention, its mechanisms, and its networks, published between 2000 and 2023 and indexed in databases such as Science Direct, Google Scholar, ProQuest, and PubMed. Articles were retrieved with the keywords "attention", "auditory attention", "auditory attention impairment", and "theories of attention", and we focused on those that provided discussions within this research domain. The studies have demonstrated that auditory attention is more than an acoustic attribute and plays a fundamental role in complex acoustic environments, information processing, and even speech comprehension. In this study, we review and summarize the proposed theories of attention and the brain networks involved in different forms of auditory attention. In conclusion, integrating auditory attention assessments, behavioral observations, and an understanding of the neural mechanisms and brain regions implicated in auditory attention proves to be an effective approach for the diagnosis and treatment of attention-related disorders.

10.
Audiol Neurootol ; : 1-11, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38880084

ABSTRACT

OBJECTIVES: The primary goal was to investigate the suitability of CHAPS for assessing cognitive abilities and auditory processing in people with hearing loss (HL), specifically in the domains of auditory processing, verbal working memory, and auditory attention. METHOD: The study comprised 44 individuals aged 7-14 years, 22 with HL (N = 11 males) and 22 with normal hearing (NH; N = 10 males). Individuals' auditory attention, working memory, and auditory processing skills were assessed, and self-report questionnaires were used. The evaluation utilized the Sustained Auditory Attention Capacity Test (SAACT), Working Memory Scale (WMS), Filtered Words Test (FWT), Auditory Figure-Ground Test (AFGT), and the Children's Auditory Performance Scale (CHAPS). Analyses included group comparisons, correlation examinations, and receiver operating characteristic (ROC) evaluations. RESULTS: There were significant differences in CHAPS total, attention, noise, quiet, and multiple-inputs scores between groups. No significant differences were seen in CHAPS_ideal and CHAPS_auditory memory across groups. Analysis of the SAACT and its subscores, the WMS and its subscores, the FWT, and the AFGT revealed a significant difference between groups, driven by the poorer performance of the HL group compared with the NH group. The SAACT and its subscores correlated significantly with CHAPS_attention. The AUC calculation showed that the SAACT and CHAPS_attention distinguished persons with or without HL (p < 0.05). WMS_STM and WMS_total correlated with the CHAPS auditory memory subscale; however, WMS_VWM did not. AUC values for the WMS and its subscores showed significant discrimination in identifying children with or without HL (p < 0.05), whereas CHAPS_auditory memory did not (AUC = 0.665; p = 0.060). The FWT and AFGT had a significant relationship with the CHAPS_noise and CHAPS_multiple inputs subscales. The CHAPS_quiet and CHAPS_ideal subtests correlated only with the AFGT. CHAPS_quiet and CHAPS_ideal did not exhibit significant discriminative value (p > 0.05) for identifying children with or without HL, while CHAPS_noise, CHAPS_multiple inputs, FWT, and AFGT did. CONCLUSION: The CHAPS_attention subscale could be a trustworthy instrument for assessing auditory attention in children with HL. However, the CHAPS_auditory memory subscale may not be suitable for testing working memory. While performance-based auditory processing tests showed better discrimination, the CHAPS_noise and CHAPS_multiple inputs subtests can still assess auditory processing in children with HL. The CHAPS_quiet and CHAPS_ideal subtests may not evaluate auditory processing.
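The AUC values reported here come from ROC analysis: the AUC equals the probability that a randomly chosen positive case (here, a child with HL) receives a higher score than a randomly chosen negative case, with ties counted as half. A sketch using the rank-based (Mann-Whitney U) formulation on made-up scores, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a positive case scores above a negative one,
    counting ties as half a win."""
    s = np.asarray(scores, float)
    y = np.asarray(labels)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical attention scores: label 1 = hearing loss, 0 = normal hearing
scores = [12, 15, 14, 20, 22, 25, 24, 19]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
value = auc(scores, labels)   # -> 0.0625: HL children rarely outscore NH peers
```

An AUC far from 0.5 in either direction indicates good discrimination; values near 0.5 (like the non-significant CHAPS_auditory memory result above) mean the measure barely separates the groups.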

11.
Front Hum Neurosci ; 18: 1382959, 2024.
Article in English | MEDLINE | ID: mdl-38818032

ABSTRACT

Balancing is a very important skill, supporting many daily life activities. Cognitive-motor interference (CMI) dual-tasking paradigms have been established to identify the cognitive load of complex natural motor tasks, such as running and cycling. Here we used wireless, smartphone-recorded electroencephalography (EEG) and motion sensors while participants were either standing on firm ground or on a slackline, either performing an auditory oddball task (dual-task condition) or no task simultaneously (single-task condition). We expected a reduced amplitude and increased latency of the P3 event-related potential (ERP) component to target sounds for the complex balancing compared to the standing on ground condition, and a further decrease in the dual-task compared to the single-task balancing condition. Further, we expected greater postural sway during slacklining while performing the concurrent auditory attention task. Twenty young, experienced slackliners performed an auditory oddball task, silently counting rare target tones presented in a series of frequently occurring standard tones. Results revealed similar P3 topographies and morphologies during both movement conditions. Contrary to our predictions we observed neither significantly reduced P3 amplitudes, nor significantly increased latencies during slacklining. Unexpectedly, we found greater postural sway during slacklining with no additional task compared to dual-tasking. Further, we found a significant correlation between the participant's skill level and P3 latency, but not between skill level and P3 amplitude or postural sway. This pattern of results indicates an interference effect for less skilled individuals, whereas individuals with a high skill level may have shown a facilitation effect. Our study adds to the growing field of research demonstrating that ERPs obtained in uncontrolled, daily-life situations can provide meaningful results. 
We argue that the individual CMI effects on the P3 ERP reflect how demanding the balancing task is for untrained individuals, drawing on limited resources that are otherwise available for auditory attention processing. In future work, the analysis of concurrently recorded motion-sensor signals will help to identify the cognitive demands of motor tasks executed in natural, uncontrolled environments.

12.
J Neural Eng ; 21(3)2024 May 30.
Article in English | MEDLINE | ID: mdl-38776893

ABSTRACT

Objective: Decoding auditory attention from brain signals is essential for the development of neuro-steered hearing aids. This study aims to overcome the challenges of extracting discriminative feature representations from electroencephalography (EEG) signals for auditory attention detection (AAD) tasks, particularly focusing on the intrinsic relationships between different EEG channels. Approach: We propose a novel attention-guided graph structure learning network, AGSLnet, which leverages potential relationships between EEG channels to improve AAD performance. Specifically, AGSLnet is designed to dynamically capture latent relationships between channels and construct a graph structure of EEG signals. Main result: We evaluated AGSLnet on two publicly available AAD datasets and demonstrated its superiority and robustness over state-of-the-art models. Visualization of the graph structure trained by AGSLnet supports previous neuroscience findings, enhancing our understanding of the underlying neural mechanisms. Significance: This study presents a novel approach for examining brain functional connections, improving AAD performance in low-latency settings, and supporting the development of neuro-steered hearing aids.


Subject(s)
Attention, Electroencephalography, Humans, Electroencephalography/methods, Attention/physiology, Auditory Perception/physiology, Neural Networks (Computer), Acoustic Stimulation/methods, Male, Adult, Female, Brain/physiology
13.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials where other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area-under-curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks, respectively, under the inter-trial strategy. Under the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. Our DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
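The difference between the two splitting strategies can be made concrete: an inter-trial split holds out whole trials, whereas an intra-trial split samples windows at random so train and test share trials, leaking trial-specific structure. A sketch (the trial and window counts are illustrative, not the study's):

```python
import numpy as np

def split_windows(n_trials, wins_per_trial, test_frac=0.2,
                  inter_trial=True, seed=0):
    """Return (train, test) lists of (trial, window) index pairs."""
    rng = np.random.default_rng(seed)
    windows = [(t, w) for t in range(n_trials) for w in range(wins_per_trial)]
    if inter_trial:
        # Hold out entire trials: no trial contributes to both sets
        held = set(rng.choice(n_trials, int(test_frac * n_trials),
                              replace=False).tolist())
        test = [tw for tw in windows if tw[0] in held]
    else:
        # Sample windows at random: trials leak across the split
        idx = set(rng.choice(len(windows), int(test_frac * len(windows)),
                             replace=False).tolist())
        test = [windows[i] for i in sorted(idx)]
    test_set = set(test)
    train = [tw for tw in windows if tw not in test_set]
    return train, test

tr_inter, te_inter = split_windows(10, 50, inter_trial=True)
tr_intra, te_intra = split_windows(10, 50, inter_trial=False)
```

Under the inter-trial split the sets of trials in train and test are disjoint; under the intra-trial split nearly every trial appears on both sides, which is why the intra-trial accuracies above are inflated.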


Subject(s)
Attention, Auditory Perception, Deep Learning, Electroencephalography, Hearing Loss, Humans, Attention/physiology, Female, Electroencephalography/methods, Male, Middle Aged, Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Hearing Loss/diagnosis, Aged, Auditory Perception/physiology, Noise, Adult, Hearing Aids, Speech Perception/physiology, Neural Networks (Computer)
14.
Int J Occup Saf Ergon ; 30(3): 754-764, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38628029

ABSTRACT

Objectives. This study aimed to investigate the effects of separate and concurrent exposure to occupational noise and hand-transmitted vibration (HTV) on auditory and cognitive attention. Methods. The experimental study was conducted with 40 construction workers who were exposed to noise (A-weighted equivalent sound pressure level of 90 dB), to HTV (10 m/s2 at 31.5 Hz), and to both concurrently, each for 30 min under simulated work with vibrating equipment used in construction. Aspects of cognitive performance were then evaluated for each individual in pre-exposure and post-exposure settings for each session. Results. The effect sizes of concurrent exposure (HTV + noise) and separate exposure to noise on auditory attention were very close (effect size = 0.648 and 0.626, respectively). The largest changes in response time for both types of attention (selective and divided) were associated with the concurrent-exposure scenario, followed by HTV exposure. The largest effects on correct responses for selective and divided attention were associated with concurrent exposure (HTV + noise), followed by noise exposure. Conclusion. The HTV effect during concurrent exposure is masked in auditory attention, where noise has the main effect. Divided attention was more affected than selective attention across the different scenarios.
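The effect sizes quoted are presumably Cohen's d, the standardised difference between pre- and post-exposure means. A sketch with made-up attention scores (not the study's data):

```python
import numpy as np

def cohens_d(pre, post):
    """Cohen's d: difference in means divided by the pooled
    standard deviation of the two score sets."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

# Hypothetical auditory-attention scores before/after noise exposure
pre = [52, 48, 50, 55, 47, 53]
post = [47, 44, 46, 50, 43, 49]
d = cohens_d(pre, post)   # negative: scores dropped after exposure
```

By the usual rule of thumb, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported values of 0.648 and 0.626 fall in the medium-to-large range.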


Subject(s)
Attention, Cognition, Occupational Noise, Occupational Exposure, Vibration, Humans, Occupational Noise/adverse effects, Attention/physiology, Occupational Exposure/adverse effects, Vibration/adverse effects, Adult, Male, Construction Industry, Reaction Time
15.
Sci Rep ; 14(1): 8861, 2024 04 17.
Article in English | MEDLINE | ID: mdl-38632246

ABSTRACT

Attention, as a cognitive ability, plays a crucial role in perception, helping humans concentrate on specific objects in the environment while discarding others. In this paper, auditory attention detection (AAD) is investigated using different dynamic features extracted from multichannel electroencephalography (EEG) signals recorded while listeners attend to a target speaker in the presence of a competing talker. To this aim, microstate and recurrence quantification analysis are utilized to extract different types of features that reflect changes in the brain state during cognitive tasks. An optimized feature set is then determined by selecting significant features based on classification performance. The classifier model is developed by hybrid sequential learning that combines Gated Recurrent Units (GRU) and a Convolutional Neural Network (CNN) in a unified framework for accurate attention detection. The proposed AAD method shows that the selected feature set achieves the most discriminative features for the classification process. It also yields the best performance on various measures compared with state-of-the-art AAD approaches from the literature. The current study is the first to validate the use of microstate and recurrence quantification parameters to differentiate auditory attention using reinforcement learning without access to the stimuli.


Subject(s)
Brain, Neural Networks (Computer), Humans, Brain Mapping/methods, Machine Learning, Attention, Electroencephalography/methods
16.
Indian J Otolaryngol Head Neck Surg ; 76(2): 1716-1723, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38566707

ABSTRACT

Making evidence-based policy decisions is challenging when information is lacking, especially when setting provider payment rates for publicly funded health insurance plans. Therefore, the goal of this study was to estimate the cost of a cochlear implant operation in a tertiary care setting in India. We also examined patients' out-of-pocket (OOP) expenses for cochlear implant surgery. We assessed the financial costs of the cochlear implantation procedure from the perspectives of both patients and the healthcare system. A bottom-up costing model was used to assess the cost that the healthcare system would bear for a cochlear implant procedure. Information on all the resources (both capital and recurrent) required to offer cochlear implantation services for hearing loss was gathered over the course of a year. 120 individuals with hearing loss who had cochlear implantation surgery disclosed their OOP costs, which included both direct medical and non-medical expenses. All costs were estimated for the 2018-2019 budget year. The health system spent ₹151 ($2), ₹578 ($7.34) and ₹37,449 ($478) per unit on ear exams, audiological evaluations, and cochlear implant surgeries, respectively. Hospitalization cost ₹202 ($2.6) per bed-day in the otolaryngology ward, or ₹1211 ($15.5). The estimated average out-of-pocket cost for a cochlear implant operation was ₹682,230 ($8710). Our research can be used to establish package rates for publicly funded insurance plans in India, plan the growth of public-sector hearing care services, and conduct cost-effectiveness assessments of various hearing care models. Supplementary Information: The online version contains supplementary material available at 10.1007/s12070-023-04389-7.

17.
Int. j. clin. health psychol. (Internet) ; 24(1): [100437], Jan-Mar, 2024. illus, tab, graph
Article in English | IBECS | ID: ibc-230378

ABSTRACT

Background: Schizophrenia often emerges in youth, and psychosis risk syndrome (PRS) precedes the onset of psychosis. Assessing the neuropsychological abnormalities of PRS individuals can support early identification of, and active intervention in, mental illness. A deficit in auditory P300 amplitude is an important manifestation of abnormal attention processing in PRS, but it remains unclear whether PRS individuals show abnormal attention processing of rhythmic compound tone stimuli, and whether the P300 amplitude induced by these stimuli is specific to PRS individuals and related to their clinical outcomes. Methods: In total, 226 participants were assessed, including 122 patients with PRS, 51 patients with emotional disorders (ED), and 53 healthy controls (HC). Baseline electroencephalography was recorded during a compound tone oddball task. The event-related potentials (ERPs) induced by rhythmic compound tone stimuli at two frequencies (20 Hz, 40 Hz) were measured. Almost all patients with PRS were followed up for 12 months and reclassified into four groups: PRS-conversion, PRS-symptomatic, PRS-emotional disorder, and PRS-complete remission. Baseline ERPs were compared among the clinical outcome groups. Results: Regardless of stimulation frequency, the average P300 amplitude was significantly higher in patients with PRS than in the ED (p = 0.003, d = 0.48) and HC (p = 0.002, d = 0.44) groups. The average P300 amplitude of the PRS-conversion group was significantly higher than that of the PRS-complete remission (p = 0.016, d = 0.72) and HC (p = 0.001, d = 0.76) groups, and the average P300 amplitude of the PRS-symptomatic group was significantly higher than that of the HC group (p = 0.006, d = 0.48)...(AU)
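The d values quoted above are Cohen's d effect sizes, i.e., group mean differences scaled by a pooled standard deviation. A minimal sketch with invented amplitude values, not the study's data:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical mean P300 amplitudes (µV) for two groups (illustrative only)
prs = [6.1, 5.8, 6.5, 7.0, 6.2]
hc = [5.2, 5.6, 5.1, 5.9, 5.4]
print(round(cohens_d(prs, hc), 2))
```

By convention, d around 0.5, as reported for the PRS vs. HC contrast, is a medium effect.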


Subject(s)
Humans , Male , Female , Adolescent , Schizophrenia , Psychology, Clinical , Mental Health , Mental Disorders , Psychotic Disorders , Case-Control Studies , Electroencephalography
18.
Elife ; 12, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38470243

ABSTRACT

Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results caution against using attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: under a combination of modelling strategies, neither neural filtering at T1 nor its 2-year change predicted individual 2-year behavioural change.


Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult to navigate for older individuals, since age-related hearing loss can impact the ability to detect and distinguish speech sounds. Some aging individuals cope better than others with this problem, but the reason why, and how listening success can change over a lifetime, is poorly understood. One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a 'neural filter' to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural 'speech tracking' can help the brain better encode the speech signals that a person is listening to. Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers wanted to answer whether differences in the strength of the neural filtering observed between aging listeners could predict how their listening ability would develop, and determine whether these neural changes were connected with changes in people's behaviours. To answer these questions, Tune and Obleser used data collected from a group of healthy middle-aged and older listeners twice, two years apart. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other. 
The researchers found that, across both timepoints, individuals with stronger neural filtering were better at listening to and distinguishing speech. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other. Tune and Obleser's findings will be relevant to neuroscientists, as well as to psychologists and audiologists whose goal is to understand differences between individuals in terms of listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual's attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.
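The dissociation described above, a brain-behaviour correlation within each timepoint but no link between the two-year changes, can be illustrated with a toy simulation in which a stable latent trait drives both measures at each timepoint while their changes are independent noise. All numbers below are simulated assumptions, not the study's data:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n = 105  # matches the sample size reported in the abstract

# A stable latent trait drives both measures at each timepoint,
# so neural filtering and behaviour correlate cross-sectionally ...
trait = [random.gauss(0, 1) for _ in range(n)]
neural_t1 = [t + random.gauss(0, 0.6) for t in trait]
behav_t1 = [t + random.gauss(0, 0.6) for t in trait]
neural_t2 = [t + random.gauss(0, 0.6) for t in trait]
behav_t2 = [t + random.gauss(0, 0.6) for t in trait]

# ... but the 2-year *changes* are pure measurement noise, hence unrelated.
delta_neural = [a - b for a, b in zip(neural_t2, neural_t1)]
delta_behav = [a - b for a, b in zip(behav_t2, behav_t1)]

print(pearson_r(neural_t1, behav_t1))        # sizeable within-timepoint link
print(pearson_r(delta_neural, delta_behav))  # typically near zero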


Subject(s)
Aging , Longevity , Adult , Humans , Brain , Auditory Perception , Benchmarking
19.
Q J Exp Psychol (Hove) ; : 17470218241242260, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38485525

ABSTRACT

Knowledge of the underlying mechanisms of effortful listening could help to reduce cases of social withdrawal and mitigate fatigue, especially in older adults. However, the relationship between transient effort and longer-term fatigue is likely to be more complex than originally thought. Here, we manipulated the presence or absence of monetary reward to examine the role of motivation and mood state in governing changes in perceived effort and fatigue from listening. In an online study, 185 participants were randomly assigned to either a "reward" (n = 91) or "no-reward" (n = 94) group and completed a dichotic listening task along with a series of questionnaires assessing changes over time in perceived effort, mood, and fatigue. Effort ratings were higher overall in the reward group, yet fatigue ratings in that group showed a shallower linear increase over time. Mediation analysis revealed an indirect effect of reward on fatigue ratings via perceived mood state; reward induced a more positive mood state, which was associated with reduced fatigue. These results suggest that (1) listening conditions rated as more "effortful" may be less fatiguing if the effort is deemed worthwhile, and (2) alterations to one's mood state represent a potential mechanism by which fatigue may be elicited during unrewarding listening situations.
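The indirect effect in a mediation analysis like the one above is commonly quantified with the product-of-coefficients approach: the reward-to-mood path (a) multiplied by the mood-to-fatigue path controlling for reward (b). The data, effect sizes, and variable names below are simulated for illustration, not the study's:

```python
import random

def ols_slope(xs, ys):
    """Simple-regression slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

def residuals(xs, ys):
    """Residuals of y after removing its linear dependence on x."""
    slope = ols_slope(xs, ys)
    n = len(xs)
    intercept = sum(ys) / n - slope * sum(xs) / n
    return [y - (intercept + slope * x) for x, y in zip(xs, ys)]

random.seed(7)
n = 185  # matches the sample size reported in the abstract
reward = [i % 2 for i in range(n)]  # 0 = no-reward, 1 = reward
mood = [0.8 * r + random.gauss(0, 1) for r in reward]    # a path: reward lifts mood
fatigue = [-0.5 * m + random.gauss(0, 1) for m in mood]  # b path: mood lowers fatigue

a = ols_slope(reward, mood)
# b path with reward partialled out (Frisch-Waugh: regress residuals on residuals)
b = ols_slope(residuals(reward, mood), residuals(reward, fatigue))
indirect = a * b  # negative: reward reduces fatigue via improved mood
print(a, b, indirect)
```

In practice the indirect effect's uncertainty would be assessed with bootstrap confidence intervals; this sketch only shows the point estimate.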

20.
Front Neurosci ; 18: 1275560, 2024.
Article in English | MEDLINE | ID: mdl-38389785

ABSTRACT

Background: Tinnitus is strongly associated with an increased risk of cognitive disabilities. The findings of this research will support future investigations into the correlation between tinnitus and the risk of cognitive impairments. Objectives: We investigated the potential correlation between tinnitus and the risk of various cognitive impairments, including dementia, compromised learning and attention, anxiety, depression, and insomnia. The study examined this relationship both in the pooled data and stratified by age group. Methods: We compiled data from case-control and cohort studies retrieved from the PubMed, Cochrane Library, and Embase databases. To minimize potential bias, two reviewers independently assessed the selected articles. After extracting the data, we calculated pooled odds ratios (ORs) using a random-effects model. Results: Seventeen relevant studies of adult populations were included in this analysis. Pooled estimates revealed a strong association between tinnitus and an elevated risk of dementia, compromised learning and auditory attention, anxiety, depression, and poor sleep quality (p < 0.05). Furthermore, the pooled analysis stratified by age showed that patients aged above 60 years, compared with those aged 18 to 60 years, exhibited more pronounced outcomes regarding the progression of cognitive impairments. Conclusion: Tinnitus has the potential to increase the risk of cognitive impairments. Moreover, geriatric patients aged above 60 show a higher susceptibility to developing cognitive disabilities compared with their younger counterparts.
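Random-effects pooling of odds ratios, as described in the Methods above, is typically done on the log-OR scale; the DerSimonian-Laird estimator is one common choice for the between-study variance. A minimal sketch with invented study-level ORs and standard errors, not the review's data:

```python
import math

def pooled_or_random_effects(ors, ses):
    """DerSimonian-Laird random-effects pooling of ORs (SEs are of log(OR))."""
    y = [math.log(o) for o in ors]  # effect sizes on the log scale
    w = [1 / se**2 for se in ses]   # fixed-effect (inverse-variance) weights
    k = len(y)
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # between-study variance estimate
    w_re = [1 / (se**2 + tau2) for se in ses]  # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.exp(y_re), tau2

ors = [1.2, 3.0, 1.5, 2.6]      # hypothetical study-level odds ratios
ses = [0.20, 0.25, 0.30, 0.22]  # hypothetical SEs of log(OR)
pooled, tau2 = pooled_or_random_effects(ors, ses)
print(round(pooled, 2), round(tau2, 3))
```

When tau2 is zero (no detected heterogeneity) the random-effects estimate collapses to the fixed-effect one; a positive tau2 spreads weight more evenly across studies.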
