Results 1 - 20 of 440
1.
Hear Res ; 453: 109108, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39244840

ABSTRACT

The middle-ear muscle reflex (MEMR) and medial olivocochlear reflex (MOCR) modify peripheral auditory function, which may reduce masking and improve speech-in-noise (SIN) recognition. Previous work and our pilot data suggest that the two reflexes respond differently to static versus dynamic noise elicitors. However, little is known about how the two reflexes work in tandem to contribute to SIN recognition. We hypothesized that SIN recognition would be significantly correlated with the strength of the MEMR and with the strength of the MOCR. Additionally, we hypothesized that SIN recognition would be best when both reflexes were activated. A total of 43 healthy, normal-hearing adults met the inclusion/exclusion criteria (35 females, age range: 19-29 years). MEMR strength was assessed using wideband absorbance. MOCR strength was assessed using transient-evoked otoacoustic emissions. SIN recognition was assessed using a modified version of the QuickSIN. All measurements were made with and without two types of contralateral noise elicitors (steady and pulsed) at two levels (50 and 65 dB SPL). Steady noise was used to primarily elicit the MOCR and pulsed noise was used to elicit both reflexes. Two baseline conditions without a contralateral elicitor were also obtained. Results revealed differences in how the MEMR and MOCR responded to elicitor type and level. Contrary to hypotheses, SIN recognition was not significantly improved in the presence of any contralateral elicitors relative to the baseline conditions. Additionally, there were no significant correlations between MEMR strength and SIN recognition, or between MOCR strength and SIN recognition. MEMR and MOCR strength were significantly correlated for pulsed noise elicitors but not steady noise elicitors. Results suggest no association between SIN recognition and the MEMR or MOCR, at least as measured and analyzed in this study. 
SIN recognition may have been influenced by factors not accounted for in this study, such as contextual cues, warranting further study.

2.
Hear Res ; 453: 109103, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39243488

ABSTRACT

Over the last decade, multiple studies have shown that hearing-impaired listeners' speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise "waves" (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the "normalized Contrast Level" (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. 
Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same setup, adding only a few minutes to the process.
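As a rough illustration of the post-processing step described above — refining a Hughson-Westlake track by fitting a logistic function to the collected responses — consider the following sketch. The trial data, function form, and starting parameters are illustrative assumptions; the abstract does not specify the ACT's exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, threshold, slope):
    # Psychometric function: probability of detecting the STM target
    # as a function of presented contrast level (dB).
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Hypothetical data pooled from a Hughson-Westlake track:
# presented contrast levels (dB) and the proportion of "heard" responses.
levels = np.array([-10, -6, -2, 0, 2, 4, 6, 8], dtype=float)
p_heard = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0])

params, _ = curve_fit(logistic, levels, p_heard, p0=[0.0, 1.0])
threshold_db, slope = params
print(f"refined threshold: {threshold_db:.1f} dB, slope: {slope:.2f}")
```

The fitted midpoint serves as the refined threshold, which is less sensitive to single reversals than the raw tracking endpoint.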

3.
Brain ; 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39300826

ABSTRACT

Developmental dyslexia is typically associated with difficulties in basic auditory processing and in manipulating speech sounds. However, the neuroanatomical correlates of auditory difficulties in developmental dyslexia (DD) and their contribution to individual clinical phenotypes are still unknown. Recent intracranial electrocorticography findings associated processing of sound amplitude rises and speech sounds with posterior and middle superior temporal gyrus (STG), respectively. We hypothesize that regional STG anatomy will relate to specific auditory abilities in DD, and that auditory processing abilities will relate to behavioral difficulties with speech and reading. One hundred and ten children (78 DD, 32 typically developing, age 7-15 years) completed amplitude rise time and speech in noise discrimination tasks. They also underwent a battery of cognitive tests. Anatomical MRI scans were used to identify regions in which local cortical gyrification complexity correlated with auditory behavior. Behaviorally, amplitude rise time but not speech in noise performance was impaired in DD. Neurally, amplitude rise time and speech in noise performance correlated with gyrification in posterior and middle STG, respectively. Furthermore, amplitude rise time significantly contributed to reading impairments in DD, while speech in noise only explained variance in phonological awareness. Finally, amplitude rise time and speech in noise performance were not correlated, and each task was correlated with distinct neuropsychological measures, emphasizing their unique contributions to DD. Overall, we provide a direct link between the neurodevelopment of the left STG and individual variability in auditory processing abilities in neurotypical and dyslexic populations.

4.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.


Subject(s)
Speech Perception, Speech, Humans, Speech Perception/physiology, Speech/physiology, Brain Mapping, Likelihood Functions, Motor Cortex/physiology, Cerebral Cortex/physiology, Cerebral Cortex/diagnostic imaging
5.
Trends Hear ; 28: 23312165241266322, 2024.
Article in English | MEDLINE | ID: mdl-39267369

ABSTRACT

Noise adaptation is the improvement in auditory function as the signal of interest is delayed in the noise. Here, we investigated if noise adaptation occurs in spectral, temporal, and spectrotemporal modulation detection as well as in speech recognition. Eighteen normal-hearing adults participated in the experiments. In the modulation detection tasks, the signal was a 200-ms spectrally and/or temporally modulated ripple noise. The spectral modulation rate was two cycles per octave, the temporal modulation rate was 10 Hz, and the spectrotemporal modulations combined these two modulations, which resulted in a downward-moving ripple. A control experiment was performed to determine if the results generalized to upward-moving ripples. In the speech recognition task, the signal consisted of disyllabic words, either unprocessed or vocoded to maintain only envelope cues. Modulation detection thresholds at 0 dB signal-to-noise ratio and speech reception thresholds were measured in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). Adaptation was calculated as the threshold difference between the early and late conditions. Adaptation in word recognition was statistically significant for vocoded words (2.1 dB) but not for natural words (0.6 dB). Adaptation was found to be statistically significant in spectral (2.1 dB) and temporal (2.2 dB) modulation detection but not in spectrotemporal modulation detection (downward ripple: 0.0 dB, upward ripple: -0.4 dB). Findings suggest that noise adaptation in speech recognition is unrelated to improvements in the encoding of spectrotemporal modulation cues.
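Ripple stimuli of the kind described above (200 ms, 2 cycles/octave spectral modulation, 10 Hz temporal modulation) can be synthesized in several ways; one common construction sums many log-spaced random-phase tones under a sinusoidal spectrotemporal envelope. The sketch below uses that approach — the frequency limits, component count, and modulation depth are assumptions, as the abstract does not give the study's exact synthesis parameters.

```python
import numpy as np

def ripple_stimulus(dur=0.2, fs=44100, f_lo=250.0, f_hi=8000.0,
                    spec_mod=2.0, temp_mod=10.0, depth=0.9, n_comp=200):
    """Moving spectrotemporal ripple built from random-phase tone components.

    spec_mod: ripple density in cycles/octave; temp_mod: ripple velocity in Hz.
    With the '+' sign in the envelope below, the ripple drifts downward in
    frequency over time (sign convention is an assumption).
    """
    t = np.arange(int(dur * fs)) / fs
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_comp) / (n_comp - 1))
    octaves = np.log2(freqs / f_lo)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_comp)
    sig = np.zeros_like(t)
    for f, x, ph in zip(freqs, octaves, phases):
        # Sinusoidal amplitude modulation along both frequency (x) and time (t)
        env = 1.0 + depth * np.sin(2 * np.pi * (temp_mod * t + spec_mod * x))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))

stim = ripple_stimulus()
print(stim.shape)  # 200 ms at 44.1 kHz = 8820 samples
```

Setting `depth=0` yields the unmodulated reference noise against which detection thresholds are measured.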


Subject(s)
Acoustic Stimulation, Auditory Threshold, Noise, Perceptual Masking, Recognition, Psychology, Speech Perception, Noise/adverse effects, Adaptation, Physiological/physiology, Cues, Humans, Male, Female, Young Adult, Adult, Speech Reception Threshold Test, Speech Intelligibility, Sound Spectrography
6.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39329356

ABSTRACT

Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.


Subject(s)
Magnetoencephalography, Motor Cortex, Phonetics, Speech Perception, Humans, Male, Female, Motor Cortex/physiology, Young Adult, Speech Perception/physiology, Adult, Functional Laterality/physiology, Discrimination, Psychological/physiology, Acoustic Stimulation, Brain Mapping, Noise
7.
Trends Hear ; 28: 23312165241273346, 2024.
Article in English | MEDLINE | ID: mdl-39195628

ABSTRACT

There is broad consensus that listening effort is an important outcome for measuring hearing performance. However, there remains debate on the best ways to measure listening effort. This study sought to measure neural correlates of listening effort using functional near-infrared spectroscopy (fNIRS) in experienced adult hearing aid users. The study evaluated impacts of amplification and signal-to-noise ratio (SNR) on cerebral blood oxygenation, with the expectation that easier listening conditions would be associated with less oxygenation in the prefrontal cortex. Thirty experienced adult hearing aid users repeated sentence-final words from low-context Revised Speech Perception in Noise Test sentences. Participants repeated words at a hard SNR (individual SNR-50) or easy SNR (individual SNR-50 + 10 dB), while wearing hearing aids fit to prescriptive targets or without wearing hearing aids. In addition to assessing listening accuracy and subjective listening effort, prefrontal blood oxygenation was measured using fNIRS. As expected, easier listening conditions (i.e., easy SNR, with hearing aids) led to better listening accuracy, lower subjective listening effort, and lower oxygenation across the entire prefrontal cortex compared to harder listening conditions. Listening accuracy and subjective listening effort were also significant predictors of oxygenation.


Subject(s)
Hearing Aids, Spectroscopy, Near-Infrared, Speech Perception, Humans, Male, Female, Speech Perception/physiology, Aged, Middle Aged, Signal-To-Noise Ratio, Acoustic Stimulation/methods, Prefrontal Cortex/physiology, Persons With Hearing Impairments/psychology, Persons With Hearing Impairments/rehabilitation, Noise/adverse effects, Correction of Hearing Impairment/instrumentation, Correction of Hearing Impairment/methods, Adult, Aged, 80 and over, Hearing/physiology, Cerebrovascular Circulation/physiology, Auditory Threshold/physiology, Speech Intelligibility/physiology
8.
eNeuro ; 11(8)2024 Aug.
Article in English | MEDLINE | ID: mdl-39095091

ABSTRACT

Adults heard recordings of two spatially separated speakers reading newspaper and magazine articles. They were asked to listen to one of them and ignore the other, and EEG was recorded to assess their neural processing. Machine learning extracted neural sources that tracked the target and distractor speakers at three levels: the acoustic envelope of speech (delta- and theta-band modulations), lexical frequency for individual words, and the contextual predictability of individual words estimated by GPT-4 and earlier lexical models. To provide a broader view of speech perception, half of the subjects completed a simultaneous visual task, and the listeners included both native and non-native English speakers. Distinct neural components were extracted for these levels of auditory and lexical processing, demonstrating that native English speakers had greater target-distractor separation compared with non-native English speakers on most measures, and that lexical processing was reduced by the visual task. Moreover, there was a novel interaction of lexical predictability and frequency with auditory processing; acoustic tracking was stronger for lexically harder words, suggesting that people listened harder to the acoustics when needed for lexical selection. This demonstrates that speech perception is not simply a feedforward process from acoustic processing to the lexicon. Rather, the adaptable context-sensitive processing long known to occur at a lexical level has broader consequences for perception, coupling with the acoustic tracking of individual speakers in noise.
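The envelope-tracking component of analyses like the one above is often estimated with a lagged linear model (a temporal response function, TRF). The following sketch recovers a known neural lag from simulated data; it is a minimal stand-in for the paper's machine-learning pipeline, and the sampling rate, delay, and lag window are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64                      # Hz, downsampled EEG rate (assumption)
n = fs * 60                  # one minute of data
envelope = np.abs(rng.standard_normal(n))   # stand-in for a speech envelope

# Simulate EEG that tracks the envelope at a ~94-ms (6-sample) delay
delay = 6
eeg = np.roll(envelope, delay) + rng.standard_normal(n)

# Estimate the TRF by least squares over a 0-250 ms lag window
max_lag = 16
X = np.column_stack([np.roll(envelope, k) for k in range(max_lag + 1)])
trf, *_ = np.linalg.lstsq(X, eeg, rcond=None)
peak = int(np.argmax(trf))
print(f"TRF peaks at lag {peak} samples (~{peak / fs * 1000:.0f} ms)")
```

In practice the same design matrix is extended with word-level regressors (lexical frequency, contextual surprisal) so that acoustic and lexical tracking can be separated, as in the study.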


Subject(s)
Electroencephalography, Noise, Speech Perception, Humans, Speech Perception/physiology, Female, Male, Adult, Young Adult, Electroencephalography/methods, Speech Acoustics, Language, Machine Learning
10.
Brain Res ; 1844: 149166, 2024 Dec 01.
Article in English | MEDLINE | ID: mdl-39151718

ABSTRACT

Acoustic information in speech changes continuously, yet listeners form discrete perceptual categories to ease the demands of perception. Being a more continuous/gradient as opposed to a more discrete/categorical listener may be further advantageous for understanding speech in noise by increasing perceptual flexibility and resolving ambiguity. The degree to which a listener's responses to a continuum of speech sounds are categorical versus continuous can be quantified using visual analog scaling (VAS) during speech labeling tasks. Here, we recorded event-related brain potentials (ERPs) to vowels along an acoustic-phonetic continuum (/u/ to /a/) while listeners categorized phonemes in both clean and noise conditions. Behavior was assessed using standard two alternative forced choice (2AFC) and VAS paradigms to evaluate categorization under task structures that promote discrete vs. continuous hearing, respectively. Behaviorally, identification curves were steeper under 2AFC vs. VAS categorization but were relatively immune to noise, suggesting robust access to abstract, phonetic categories even under signal degradation. Behavioral slopes were correlated with listeners' QuickSIN scores; shallower slopes corresponded with better speech in noise performance, suggesting a perceptual advantage to noise degraded speech comprehension conferred by a more gradient listening strategy. At the neural level, P2 amplitudes and latencies of the ERPs were modulated by task and noise; VAS responses were larger and showed greater noise-related latency delays than 2AFC responses. More gradient responders had smaller shifts in ERP latency with noise, suggesting their neural encoding of speech was more resilient to noise degradation. Interestingly, source-resolved ERPs showed that more gradient listening was also correlated with stronger neural responses in left superior temporal gyrus. 
Our results demonstrate that listening strategy modulates the categorical organization of speech and behavioral success, with more continuous/gradient listening being advantageous to sentential speech in noise perception.
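The steepness contrast between 2AFC and VAS identification curves can be quantified by fitting a sigmoid to each response set and comparing the slope parameters; steeper slopes indicate more categorical labeling. The sketch below does this on invented response proportions — the study's actual continuum steps and data differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    # Identification function: proportion of /a/ responses along the continuum
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8, dtype=float)  # 7-step /u/-/a/ continuum (illustrative)

# Proportion of /a/ labels: binary 2AFC choices vs. continuous VAS ratings
p_2afc = np.array([0.02, 0.05, 0.10, 0.50, 0.90, 0.95, 0.98])
p_vas  = np.array([0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.90])

(x0_a, k_2afc), _ = curve_fit(sigmoid, steps, p_2afc, p0=[4.0, 1.0])
(x0_v, k_vas), _  = curve_fit(sigmoid, steps, p_vas,  p0=[4.0, 1.0])
print(f"2AFC slope {k_2afc:.2f} > VAS slope {k_vas:.2f}: "
      "more categorical labeling under 2AFC")
```

Per-listener slope values from fits like these are what can then be correlated with QuickSIN scores to test whether gradient listening predicts speech-in-noise performance.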


Subject(s)
Electroencephalography, Noise, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Young Adult, Adult, Electroencephalography/methods, Acoustic Stimulation/methods, Evoked Potentials/physiology, Brain/physiology, Evoked Potentials, Auditory/physiology, Phonetics
11.
Trends Hear ; 28: 23312165241275895, 2024.
Article in English | MEDLINE | ID: mdl-39212078

ABSTRACT

Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into "Trained-in-Quiet" (n = 15), "Trained-in-Noise" (n = 20), and "Control" (n = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing combined fundamental frequency (F0) + formant frequencies voice cues. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli, utilizing either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment conducted by all three groups, identical to the first session. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond control group improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the most significant SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, findings suggest training conditions impact generalization by influencing processing levels associated with the trained task. Training in noisy conditions may prompt higher auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.


Subject(s)
Acoustic Stimulation, Cues, Generalization, Psychological, Noise, Speech Perception, Humans, Male, Female, Young Adult, Speech Perception/physiology, Noise/adverse effects, Adult, Recognition, Psychology, Perceptual Masking, Adolescent, Speech Acoustics, Voice Quality, Discrimination Learning/physiology, Voice/physiology
12.
Neuropsychologia ; 203: 108968, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39117064

ABSTRACT

We examined the neural correlates underlying the semantic processing of native- and nonnative-accented sentences, presented in quiet or embedded in multi-talker noise. Implementing a semantic violation paradigm, 36 English monolingual young adults listened to American-accented (native) and Chinese-accented (nonnative) English sentences with or without semantic anomalies, presented in quiet or embedded in multi-talker noise, while EEG was recorded. After hearing each sentence, participants verbally repeated the sentence, which was coded and scored as an offline comprehension accuracy measure. In line with earlier behavioral studies, the negative impact of background noise on sentence repetition accuracy was higher for nonnative-accented than for native-accented sentences. At the neural level, the N400 effect for semantic anomaly was larger for native-accented than for nonnative-accented sentences, and was also larger for sentences presented in quiet than in noise, indicating impaired lexical-semantic access when listening to nonnative-accented speech or sentences embedded in noise. No semantic N400 effect was observed for nonnative-accented sentences presented in noise. Furthermore, the frequency of neural oscillations in the alpha frequency band (an index of online cognitive listening effort) was higher when listening to sentences in noise versus in quiet, but no difference was observed across the accent conditions. Semantic anomalies presented in background noise also elicited higher theta activity, whereas processing nonnative-accented anomalies was associated with decreased theta activity. Taken together, we found that listening to nonnative accents or background noise is associated with processing challenges during online semantic access, leading to decreased comprehension accuracy. However, the underlying cognitive mechanism (e.g., associated listening efforts) might manifest differently across accented speech processing and speech in noise processing.


Subject(s)
Electroencephalography, Noise, Semantics, Speech Perception, Humans, Speech Perception/physiology, Female, Male, Young Adult, Adult, Comprehension/physiology, Evoked Potentials/physiology, Brain/physiology, Acoustic Stimulation
13.
Conscious Cogn ; 124: 103747, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39213729

ABSTRACT

Reporting discomfort when noise affects listening experience suggests that listeners may be aware, at least to some extent, of adverse environmental conditions and their impact on listening experience. This involves monitoring internal states (effort and confidence). Here we quantified continuous self-report indices that track one's own internal states and investigated age-related differences in this ability. We instructed two groups of young and older adults to continuously report their confidence and effort while listening to stories in fluctuating noise. Using cross-correlation analyses between the time series of fluctuating noise and those of perceived effort or confidence, we showed that (1) participants modified their assessment of effort and confidence based on variations in the noise, with a lag of about 4 s; and (2) there were no differences between the groups. These findings support extending this method to other domains and broadening the definition of metacognition, and they highlight the value of this ability in older adults.
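A minimal sketch of the cross-correlation analysis described above — locating the lag at which continuous self-reported effort best tracks the fluctuating noise — might look like this. The 1-Hz sampling, simulated series, and lag window are assumptions; the study's actual sampling and preprocessing are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
noise_level = rng.standard_normal(300)   # fluctuating masker level, 1 sample/s
lag_true = 4                             # effort follows the noise by 4 s
effort = np.roll(noise_level, lag_true) + 0.3 * rng.standard_normal(300)

def xcorr_lag(x, y, max_lag=20):
    # Normalized cross-correlation at non-negative lags (y lagging x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(max_lag + 1)
    r = np.array([np.mean(x[: len(x) - k] * y[k:]) for k in lags])
    return lags, r

lags, r = xcorr_lag(noise_level, effort)
best = int(lags[np.argmax(r)])
print(f"peak correlation at lag {best} s")
```

The lag of the cross-correlation peak is the estimate of how quickly listeners update their self-assessment after a change in the noise.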


Subject(s)
Noise, Speech Perception, Humans, Male, Aged, Female, Adult, Young Adult, Speech Perception/physiology, Middle Aged, Metacognition/physiology, Aging/physiology, Auditory Perception/physiology, Self Concept, Aged, 80 and over, Age Factors
14.
Q J Exp Psychol (Hove) ; : 17470218241278649, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39164830

ABSTRACT

Seeing the visual articulatory movements of a speaker, while hearing their voice, helps with understanding what is said. This multisensory enhancement is particularly evident in noisy listening conditions. Multisensory enhancement also occurs even in auditory-only conditions: auditory-only speech and voice-identity recognition are superior for speakers previously learned with their face, compared to control learning; an effect termed the "face-benefit." Whether the face-benefit can assist in maintaining robust perception in increasingly noisy listening conditions, similar to concurrent multisensory input, is unknown. Here, in two behavioural experiments, we examined this hypothesis. In each experiment, participants learned a series of speakers' voices together with their dynamic face or control image. Following learning, participants listened to auditory-only sentences spoken by the same speakers and recognised the content of the sentences (speech recognition, Experiment 1) or the voice-identity of the speaker (Experiment 2) in increasing levels of auditory noise. A face-benefit was observed in 14 of 30 participants (47%) for speech recognition and in 19 of 25 participants (76%) for voice-identity recognition. For those participants who demonstrated a face-benefit, the face-benefit increased with auditory noise levels. Taken together, the results support an audio-visual model of auditory communication and suggest that the brain can develop a flexible system in which learned facial characteristics are used to deal with varying auditory uncertainty.

15.
Neuropsychologia ; 202: 108960, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39032629

ABSTRACT

Congenital amusia is a neurodevelopmental disorder characterized by deficits of music perception and production, which are related to altered pitch processing. The present study used a wide variety of tasks to test potential patterns of processing impairment in individuals with congenital amusia (N = 18) in comparison to matched controls (N = 19), notably classical pitch processing tests (i.e., pitch change detection, pitch direction of change identification, and pitch short-term memory tasks) together with tasks assessing other aspects of pitch-related auditory cognition, such as emotion recognition in speech, sound segregation in tone sequences, and speech-in-noise perception. Additional behavioral measures were also collected, including text reading/copying tests, visual control tasks, and a subjective assessment of hearing abilities. As expected, amusics' performance was impaired for the three pitch-specific tasks compared to controls. This deficit of pitch perception had a self-perceived impact on amusics' quality of hearing. Moreover, participants with amusia were impaired in emotion recognition in vowels compared to controls, but no group difference was observed for emotion recognition in sentences, replicating previous data. Despite pitch processing deficits, participants with amusia did not differ from controls in sound segregation and speech-in-noise perception. Text reading and visual control tests did not reveal any impairments in participants with amusia compared to controls. However, the copying test revealed more numerous eye-movements and a smaller memory span. These results allow us to refine the pattern of pitch processing and memory deficits in congenital amusia, thus contributing further to understand pitch-related auditory cognition. 
Together with previous reports suggesting a comorbidity between congenital amusia and dyslexia, the findings call for further investigation of language-related abilities in this disorder even in the absence of neurodevelopmental language disorder diagnosis.


Subject(s)
Auditory Perceptual Disorders, Music, Pitch Perception, Speech Perception, Humans, Auditory Perceptual Disorders/physiopathology, Female, Male, Pitch Perception/physiology, Adult, Speech Perception/physiology, Young Adult, Middle Aged, Memory, Short-Term/physiology, Emotions/physiology
16.
Front Hum Neurosci ; 18: 1406916, 2024.
Article in English | MEDLINE | ID: mdl-38974481

ABSTRACT

Background: For adults with auditory processing disorder (APD), listening and communicating can be difficult, potentially leading to social isolation, depression, employment difficulties, and reduced quality of life. Despite existing practice guidelines suggesting treatments, the efficacy of these interventions remains uncertain due to a lack of comprehensive reviews. This systematic review and meta-analysis aims to establish current evidence on the effectiveness of interventions for APD in adults, addressing the urgent need for clarity in the field. Methods: Following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines, we conducted a systematic search across MEDLINE (Ovid), Embase (Ovid), Web of Science and Scopus, focusing on intervention studies involving adults with APD. Studies that met the inclusion criteria were grouped according to intervention, with a meta-analysis only conducted where intervention, study design and outcome measure were comparable. Results: Out of 1,618 screened records, 13 studies were included, covering auditory training (AT), low-gain hearing aids (LGHA), and personal remote microphone systems (PRMS). AT showed mixed results, with some improvements in speech intelligibility and listening ability, indicating potential benefits but highlighting the need for standardized protocols. For LGHA, the included studies demonstrated significant improvements in monaural low-redundancy speech testing (p < 0.05), suggesting LGHA could enhance speech perception in noisy environments; however, limitations include small sample sizes and potential biases in study design. PRMS demonstrated the most consistent evidence of benefit, significantly improving speech testing results, with no additional benefit from combining PRMS with other interventions.
Discussion: PRMS presents the most evidence-supported intervention for adults with APD, although further high-quality research is crucial for all intervention types. The establishment and implementation of standardized intervention protocols alongside rigorously validated outcome measures will enable a more evidence-based approach to managing APD in adults.

17.
Laryngoscope ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39082623

ABSTRACT

OBJECTIVES: To assess the connection between tinnitus and central auditory dysfunction using both central auditory tests (CATs) and diffusion tensor imaging (DTI) of brain regions crucial for central auditory processing. METHODS: This prospective case-control study included 15 patients with persistent tinnitus and 20 healthy volunteers as controls. All participants underwent CATs for memory and attention, as well as DTI; the Tinnitus Handicap Inventory (THI) questionnaire was also administered. Mean diffusivity (MD) and fractional anisotropy (FA) values were determined for several brain regions. RESULTS: The tinnitus group performed significantly worse on the CATs (memory for content, sequence memory, and speech perception in noise (SPIN) at different signal-to-noise ratios, "SNRs") than the control group. On DTI, the tinnitus group showed decreased FA in several brain areas, including the cingulum, prefrontal cortex (PFC), insula, and hippocampus, and significantly higher MD in the cingulum, BA-46, and amygdala compared with the control group. FA values of BA-46 were positively correlated with SPIN-SNR-10 scores, and FA values of the middle cingulum were positively correlated with SPIN-SNRzero scores. MD values at BA-46 were negatively correlated with SPIN-SNR-10. THI scores were negatively correlated with FA at BA-46 but positively correlated with MD at the amygdala. CONCLUSIONS: Central auditory dysfunction may be linked to the underlying neurophysiological changes in chronic tinnitus. LEVEL OF EVIDENCE: Level 2 Laryngoscope, 2024.

18.
Brain Topogr ; 37(6): 1139-1157, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39042322

ABSTRACT

Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique that is portable and acoustically silent, has become a promising tool for evaluating auditory brain function in hearing-vulnerable individuals. This study used fNIRS for the first time to evaluate neuroplasticity of speech-in-noise processing in older adults. Ten older adults, most with moderate-to-mild hearing loss, completed a 4-week speech-in-noise training. Their speech-in-noise performance and fNIRS brain responses to speech (auditory sentences in noise), non-speech (spectrally-rotated speech in noise) and visual (flashing chequerboards) stimuli were evaluated pre-training (T0) and post-training (immediately after training, T1; and after a 4-week retention period, T2). Behaviourally, speech-in-noise performance improved after retention (T2 vs. T0) but not immediately after training (T1 vs. T0). Neurally, brain responses to speech vs. non-speech decreased significantly in the left auditory cortex after retention (T2 vs. T0 and T2 vs. T1), which we interpret as suppressed processing of background noise during speech listening, alongside the significant behavioural improvements. Meanwhile, functional connectivity within and between multiple regions of the temporal, parietal and frontal lobes was significantly enhanced in the speech condition after retention (T2 vs. T0). We also found neural changes before the emergence of significant behavioural improvements: compared to pre-training, responses to speech vs. non-speech in the left frontal/prefrontal cortex were decreased significantly both immediately after training (T1 vs. T0) and after retention (T2 vs. T0), reflecting possible alleviation of listening effort. Finally, connectivity between auditory and higher-level non-auditory (parietal and frontal) cortices in response to visual stimuli was significantly decreased immediately after training (T1 vs. T0), indicating decreased cross-modal takeover of speech-related regions during visual processing. Neuroplasticity can thus be observed not only alongside, but also before, behavioural changes in speech-in-noise perception. To our knowledge, this is the first fNIRS study to evaluate speech-based auditory neuroplasticity in older adults, illustrating the promise of fNIRS for detecting neuroplasticity in hearing-vulnerable individuals.
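The T0-vs-T1/T2 contrasts reported above are within-subject comparisons; a minimal sketch of the underlying paired t statistic (with hypothetical per-participant response amplitudes, not the study's fNIRS data) is:

```python
import math

def paired_t(pre, post):
    """Paired t statistic (and degrees of freedom) for within-subject pre/post data."""
    diffs = [b - a for a, b in zip(pre, post)]      # post minus pre, per participant
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance of differences
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Hypothetical response amplitudes for 10 participants at T0 (pre) and T2 (post);
# a uniformly lower post value mimics the reported response decrease.
pre  = [1.2, 0.9, 1.1, 1.4, 1.0, 1.3, 0.8, 1.1, 1.2, 1.0]
post = [0.8, 0.7, 0.9, 1.0, 0.7, 1.0, 0.6, 0.9, 0.8, 0.7]
t_stat, df = paired_t(pre, post)
```

A strongly negative t here corresponds to the "decreased response" pattern; in practice the t statistic would be compared against the t distribution with n-1 degrees of freedom.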


Subject(s)
Neuronal Plasticity , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Spectroscopy, Near-Infrared/methods , Male , Female , Neuronal Plasticity/physiology , Aged , Speech Perception/physiology , Acoustic Stimulation/methods , Brain/physiology , Brain/diagnostic imaging , Middle Aged , Brain Mapping/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging
19.
Hear Res ; 451: 109081, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39004015

ABSTRACT

Speech-in-noise (SIN) perception is a fundamental ability that declines with aging, as does general cognition. We assessed whether auditory cognitive ability, in particular short-term memory for sound features, contributes to both. Specifically, we examined how auditory memory for fundamental sound features (the carrier frequency and amplitude-modulation rate of modulated white noise) contributes to SIN perception. We assessed SIN in 153 healthy participants with varying degrees of hearing loss using measures that require single-digit perception (Digits-in-Noise, DIN) and sentence perception (Speech-in-Babble, SIB). Independent variables were auditory memory and a range of other factors, including the pure-tone audiogram (PTA), a measure of dichotic pitch-in-noise perception (Huggins pitch), and demographic variables such as age and sex. Multiple linear regression models were compared using Bayesian Model Comparison. The best predictor model for DIN included PTA and Huggins pitch (r2 = 0.32, p < 0.001), whereas the best model for SIB additionally included auditory memory for sound features (r2 = 0.24, p < 0.001). Further analysis demonstrated that auditory memory also explained a significant portion of the variance (28 %) in scores on a cognitive screening test for dementia. Auditory memory for non-speech sounds may therefore be an important predictor of both SIN perception and cognitive ability.
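This abstract describes selecting among multiple linear regression models via Bayesian Model Comparison. A simplified sketch using BIC as a rough proxy for that comparison is below; the data are simulated and the predictor names and effect sizes are hypothetical stand-ins, not the study's:

```python
import numpy as np

def fit_ols_bic(X, y):
    """OLS fit; return BIC = n*log(RSS/n) + k*log(n). Lower BIC is preferred."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    n, k = len(y), X1.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 153                                             # matches the reported sample size
pta = rng.normal(size=n)
huggins = rng.normal(size=n)
memory = rng.normal(size=n)
# Simulated sentence-in-babble score that genuinely depends on auditory memory
sib = 0.4 * pta + 0.3 * huggins + 0.5 * memory + rng.normal(size=n)

bic_base = fit_ols_bic(np.column_stack([pta, huggins]), sib)
bic_full = fit_ols_bic(np.column_stack([pta, huggins, memory]), sib)
```

Because BIC penalizes each extra parameter by log(n), the fuller model wins only when the added predictor reduces residual error enough, which is the same trade-off a formal Bayesian model comparison makes.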


Subject(s)
Acoustic Stimulation , Cognition , Memory, Short-Term , Noise , Perceptual Masking , Speech Perception , Humans , Female , Male , Noise/adverse effects , Middle Aged , Adult , Aged , Young Adult , Pitch Perception , Bayes Theorem , Aged, 80 and over , Audiometry, Pure-Tone , Hearing , Auditory Threshold , Dichotic Listening Tests
20.
J Clin Med ; 13(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39064066

ABSTRACT

Objectives: To investigate factors contributing to the effective management of age-related hearing loss (ARHL) rehabilitation. Methods: A systematic review was conducted following PRISMA guidelines. The protocol was registered in PROSPERO (CRD42022374811). Articles were identified through systematic searches of the Scopus, PubMed, Web of Science, and Cochrane databases in May 2024; only articles published between January 2005 and May 2024 were included. Studies were assessed for eligibility by two independent researchers and evaluated using the Crowe Critical Appraisal Tool v1.4 (CCAT). Results: Of the 278 articles identified, 54 were included. Three factors explain effective hearing aid (HA) use. First, HA signal processing: directional microphones and noise reduction improve user comfort and speech understanding in noise. Second, HA fitting: the NAL prescription rules are the gold standard, and bilateral, high-performing HAs support spatial localization and comprehension in noise. Third, a patient-centered approach: patient-reported outcome measures (PROMs), questionnaires, counseling, and regular follow-up involve patients in their therapeutic rehabilitation. Conclusions: Reaching a consensus on acoustic parameters is challenging due to variability in audiological results. Involving patients in their rehabilitation, addressing their needs and expectations, and offering individualized care are crucial.
