Results 1 - 20 of 27
1.
J Speech Lang Hear Res ; 66(10): 4066-4082, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37672797

ABSTRACT

PURPOSE: This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD: Using a dual-task paradigm, on each trial, participants were instructed to perform either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants were instructed to press a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the visualized number was even or odd (more complex task). RESULTS: Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was only observable for words with lower lexical frequency and only when multitasking demands were relatively simpler. CONCLUSIONS: This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results of this study also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.
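In practice, the easier and harder SNR conditions described above come down to scaling the speech-shaped noise relative to the speech level before mixing. Below is a minimal Python sketch of that step, assuming RMS-based level matching; the function name and usage are illustrative, not taken from the study.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db` (dB),
    then add it to the speech. Assumes `noise` is at least as long as `speech`.
    Illustrative sketch only, not the study's stimulus-generation code."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise = noise[:len(speech)]
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # snr_db = 20 * log10(speech_rms / target_noise_rms)
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)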


Subject(s)
Speech Perception, Humans, Listening Effort, Noise, Hearing Tests, Acoustics
2.
J Neurophysiol ; 129(6): 1359-1377, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37096924

ABSTRACT

Understanding speech in a noisy environment is crucial in day-to-day interactions and yet becomes more challenging with age, even for healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses that time-locked to the speech envelopes, using speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together, these results suggest that several mechanisms are at play that compensate for age-related temporal processing deficits at several stages but are not able to fully reestablish unimpaired speech perception. NEW & NOTEWORTHY: We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty in understanding speech in noise. These changes occur in both timing and strength of the speech representations at different cortical processing stages and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
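The TRF analysis summarized above is, in its general form, a regularized lagged regression from the speech envelope onto the neural response. The sketch below is a generic ridge-regression TRF estimator, not the authors' pipeline; the lag range, regularization value, and array names are illustrative assumptions.

import numpy as np

def estimate_trf(envelope, response, fs, tmin=-0.05, tmax=0.35, lam=1e3):
    """Ridge-regression TRF: regress the neural response onto time-lagged
    copies of the speech envelope. `envelope` and `response` are same-length
    1-D arrays sampled at `fs`; lag range and regularization are illustrative."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.column_stack([np.roll(envelope, lag) for lag in lags])  # lagged copies
    XtX = X.T @ X + lam * np.eye(X.shape[1])                       # ridge penalty
    trf = np.linalg.solve(XtX, X.T @ response)                     # one weight per lag
    times_ms = 1000 * lags / fs                                    # lag axis in ms
    return times_ms, trf

Peaks in the returned weights near ~50, ~100, and ~200 ms would correspond to the early, middle, and late components discussed in the abstract.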


Subject(s)
Speech Perception, Speech, Speech/physiology, Auditory Perception, Noise, Speech Perception/physiology, Acoustic Stimulation/methods
3.
J Acoust Soc Am ; 153(1): 286, 2023 01.
Article in English | MEDLINE | ID: mdl-36732241

ABSTRACT

Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified from a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
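A minimal sketch of how a stochastic figure-ground stimulus of the kind described can be generated: each short bin contains a chord of randomly drawn pure tones, and over a stretch of consecutive bins a fixed "figure" set of frequencies is added so that it is temporally coherent. All parameter values here are illustrative, not the study's.

import numpy as np

def sfg_stimulus(fs=44100, n_bins=40, bin_dur=0.05, n_bg=10, n_fig=6,
                 fig_start=15, fig_len=10, fmin=200.0, fmax=7000.0, rng=None):
    """Stochastic figure-ground (SFG) stimulus: random tone chords with a
    'figure' of n_fig temporally coherent tones repeating across fig_len bins.
    Illustrative parameters only."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(int(bin_dur * fs)) / fs
    fig_freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_fig))
    bins = []
    for b in range(n_bins):
        freqs = list(np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_bg)))
        if fig_start <= b < fig_start + fig_len:
            freqs += list(fig_freqs)          # coherent figure tones repeat across bins
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        bins.append(chord / len(freqs))       # rough level normalization
    return np.concatenate(bins)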


Subject(s)
Short-Term Memory, Speech Perception, Adult, Humans, Speech, Individuality, Pure-Tone Audiometry
4.
Am J Audiol ; 32(3S): 694-705, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36796026

ABSTRACT

PURPOSE: The objectives of this study were to (a) describe normative ranges, expressed as reference intervals (RIs), for vestibular and balance function tests in a cohort of Service Members and Veterans (SMVs) and (b) describe the interrater reliability of these tests. METHOD: As part of the Defense and Veterans Brain Injury Center (DVBIC)/Traumatic Brain Injury Center of Excellence 15-year Longitudinal Traumatic Brain Injury (TBI) Study, participants completed the following: vestibulo-ocular reflex suppression, visual-vestibular enhancement, subjective visual vertical, subjective visual horizontal, sinusoidal harmonic acceleration, the computerized rotational head impulse test (crHIT), and the sensory organization test. RIs were calculated using nonparametric methods, and interrater reliability was assessed using intraclass correlation coefficients between three audiologists who independently reviewed and cleaned the data. RESULTS: Reference populations for each outcome measure comprised 40 to 72 individuals, 19 to 61 years of age, who served as either noninjured controls (NIC) or injured controls (IC) in the 15-year study; none had a history of TBI or blast exposure. A subset of 15 SMVs from the NIC, IC, and TBI groups was included in the interrater reliability calculations. RIs are reported for 27 outcome measures from the seven rotational vestibular and balance tests. Interrater reliability was considered excellent for all tests except the crHIT, which was found to have good interrater reliability. CONCLUSION: This study provides clinicians and scientists with important information regarding normative ranges and interrater reliability for rotational vestibular and balance tests in SMVs.
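Nonparametric reference intervals of the kind reported above are conventionally the central 95% of the reference sample (2.5th to 97.5th percentiles). Below is a minimal sketch of that calculation; the percentile estimator and the simulated example values are illustrative, not the study's exact procedure or data.

import numpy as np

def nonparametric_ri(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: the central 95% of the reference
    sample via simple percentiles (illustrative, not the study's estimator)."""
    values = np.asarray(values, dtype=float)
    values = values[~np.isnan(values)]
    return np.percentile(values, [lower_pct, upper_pct])

# Simulated outcome values for illustration only (not study data)
rng = np.random.default_rng(0)
print(nonparametric_ri(rng.normal(0.0, 1.5, size=60)))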


Subject(s)
Traumatic Brain Injuries, Brain Injuries, Veterans, Humans, Reproducibility of Results, Vestibulo-Ocular Reflex, Traumatic Brain Injuries/diagnosis
5.
Neuroimage ; 260: 119496, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35870697

ABSTRACT

Identifying the directed connectivity that underlies networked activity between different cortical areas is critical for understanding the neural mechanisms behind sensory processing. Granger causality (GC) is widely used for this purpose in functional magnetic resonance imaging analysis, but its temporal resolution is low, making it difficult to capture the millisecond-scale interactions underlying sensory processing. Magnetoencephalography (MEG) has millisecond resolution, but only provides low-dimensional sensor-level linear mixtures of neural sources, which makes GC inference challenging. Conventional methods proceed in two stages: first, cortical sources are estimated from MEG using a source localization technique, followed by GC inference among the estimated sources. However, the spatiotemporal biases in estimating sources propagate into the subsequent GC analysis stage and may result in both false alarms and missed true GC links. Here, we introduce the Network Localized Granger Causality (NLGC) inference paradigm, which models the source dynamics as latent sparse multivariate autoregressive processes, estimates their parameters directly from the MEG measurements, integrated with source localization, and employs the resulting parameter estimates to produce a precise statistical characterization of the detected GC links. We offer several theoretical and algorithmic innovations within NLGC and further examine its utility via comprehensive simulations and application to MEG data from an auditory task involving tone processing from both younger and older participants. Our simulation studies reveal that NLGC is markedly robust with respect to model mismatch, network size, and low signal-to-noise ratio, whereas the conventional two-stage methods result in high rates of false alarms and missed detections. We also demonstrate the advantages of NLGC in revealing the cortical network-level characterization of neural activity during tone processing and resting state by delineating task- and age-related connectivity changes.
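NLGC itself couples source localization with sparse multivariate autoregressive estimation and is beyond a short snippet, but the conventional pairwise Granger-causality test that the abstract uses as its two-stage baseline can be sketched directly: compare how well a target signal's past predicts it with and without the candidate driver's past. This is the textbook baseline, not the NLGC algorithm; the model order is an arbitrary illustration.

import numpy as np

def granger_f_stat(y, x, order=5):
    """Classic Granger-causality comparison: does the past of x improve
    prediction of y beyond y's own past? Returns the F statistic.
    Sketch of the conventional (two-stage) baseline, not NLGC itself."""
    T = len(y)
    Y = y[order:]
    # Restricted model: y's own past only
    Xr = np.column_stack([y[order - k:T - k] for k in range(1, order + 1)])
    # Full model: y's past plus x's past
    Xf = np.column_stack([Xr] + [x[order - k:T - k] for k in range(1, order + 1)])
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2)
    df1, df2 = order, len(Y) - 2 * order
    return ((rss_r - rss_f) / df1) / (rss_f / df2)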


Subject(s)
Magnetic Resonance Imaging, Magnetoencephalography, Algorithms, Brain/diagnostic imaging, Computer Simulation, Humans, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods
6.
J Acoust Soc Am ; 151(6): 3866, 2022 06.
Article in English | MEDLINE | ID: mdl-35778214

ABSTRACT

Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.


Subject(s)
Hearing Tests, Quality of Life, Auditory Threshold, Hearing, Noise/adverse effects
7.
J Cogn Neurosci ; 34(1): 127-152, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34673939

ABSTRACT

Difficulty perceiving phonological contrasts in a second language (L2) can impede initial L2 lexical learning. Such is the case for English speakers learning tonal languages, like Mandarin Chinese. Given the hypothesized role of reduced neuroplasticity in adulthood in limiting L2 phonological perception, the current study examined whether transcutaneous auricular vagus nerve stimulation (taVNS), a relatively new neuromodulatory technique, can facilitate L2 lexical learning for English speakers learning Mandarin Chinese over 2 days. Using a double-blind design, one group of participants received 10 min of continuous priming taVNS before lexical training and testing each day, a second group received 500 msec of peristimulus (peristim) taVNS preceding each to-be-learned item in the same tasks, and a third group received passive sham stimulation. Results of the lexical recognition test administered at the end of each day revealed evidence of learning for all groups, but a higher likelihood of accuracy across days for the peristim group and a greater improvement in response time between days for the priming group. Analyses of N400 ERP components elicited during the same tasks indicate that the behavioral advantages for both taVNS groups coincided with stronger lexico-semantic encoding of target words. Comparison of these findings to pupillometry results for the same study reported in Pandza, N. B., Phillips, I., Karuzis, V. P., O'Rourke, P., and Kuchinsky, S. E. (Neurostimulation and pupillometry: New directions for learning and research in applied linguistics. Annual Review of Applied Linguistics, 40, 56-77, 2020) suggests that positive effects of priming taVNS (but not peristim taVNS) on lexico-semantic encoding are related to sustained attentional effort.


Subject(s)
Language, Vagus Nerve Stimulation, Adult, Electroencephalography, Evoked Potentials, Female, Humans, Male, Semantics
8.
J Acoust Soc Am ; 150(2): 920, 2021 08.
Article in English | MEDLINE | ID: mdl-34470337

ABSTRACT

One potential benefit of bilateral cochlear implants is reduced listening effort in speech-on-speech masking situations. However, the symmetry of the input across ears, possibly related to spectral resolution, could impact binaural benefits. Fifteen young adults with normal hearing performed digit recall with target and interfering digits presented to separate ears and attention directed to the target ear. Recall accuracy and pupil size over time (used as an index of listening effort) were measured for unprocessed, 16-channel vocoded, and 4-channel vocoded digits. Recall accuracy was significantly lower for dichotic (with interfering digits) than for monotic listening. Dichotic recall accuracy was highest when the target was less degraded and the interferer was more degraded. With matched target and interferer spectral resolution, pupil dilation was lower with more degradation. Pupil dilation grew more shallowly over time when the interferer had more degradation. Overall, interferer spectral resolution more strongly affected listening effort than target spectral resolution. These results suggest that interfering speech both lowers performance and increases listening effort, and that the relative spectral resolution of target and interferer affects the listening experience. Ignoring a clearer interferer is more effortful.
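The 16-channel and 4-channel conditions above refer to noise-band vocoding, which degrades spectral resolution by replacing the fine structure in each analysis band with envelope-modulated noise. A minimal sketch with scipy follows; the band edges, filter orders, and envelope smoothing cutoff are illustrative choices, not the study's parameters.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, fmin=100.0, fmax=8000.0):
    """Noise-band vocoder: split the signal into log-spaced bands, extract
    each band's envelope, and use it to modulate band-limited noise.
    Band edges, filter order, and envelope smoothing are illustrative."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        env = np.abs(hilbert(band))                   # band envelope
        bl, al = butter(2, 50 / (fs / 2))             # smooth envelope below ~50 Hz
        env = filtfilt(bl, al, env)
        carrier = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += env * carrier                          # envelope-modulated noise band
    return out / np.max(np.abs(out))                  # rough normalization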


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Auditory Perception, Humans, Speech, Young Adult
9.
Trends Hear ; 25: 23312165211013256, 2021.
Article in English | MEDLINE | ID: mdl-34024219

ABSTRACT

The measurement of pupil dilation has become a common way to assess listening effort. Pupillometry data are subject to artifacts, requiring highly contaminated data to be discarded from analysis. It is unknown how trial exclusion criteria impact experimental results. The present study examined the effect of a common exclusion criterion, percentage of blinks, on speech intelligibility and pupil dilation measures in 9 participants with single-sided deafness (SSD) and 20 participants with normal hearing. Participants listened to and repeated sentences in quiet or with speech maskers. Pupillometry trials were processed using three levels of blink exclusion criteria: 15%, 30%, and 45%. These percentages reflect a threshold for missing data points in a trial, where trials that exceed the threshold are excluded from analysis. Results indicated that pupil dilation was significantly greater and intelligibility was significantly lower in the masker compared with the quiet condition for both groups. Across-group comparisons revealed that speech intelligibility in the SSD group decreased significantly more than the normal hearing group from quiet to masker conditions, but the change in pupil dilation was similar for both groups. There was no effect of blink criteria on speech intelligibility or pupil dilation results for either group. However, the total percentage of blinks in the masker condition was significantly greater than in the quiet condition for the SSD group, which is consistent with previous studies that have found a relationship between blinking and task difficulty. This association should be carefully considered in future experiments using pupillometry to gauge listening effort.
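The exclusion rule being manipulated here is simple to state in code: a trial is discarded when the percentage of missing (blink) pupil samples exceeds the chosen threshold. A minimal sketch, assuming missing samples are coded as NaN; the array layout and variable names are assumptions.

import numpy as np

def exclude_trials_by_blinks(trials, threshold_pct):
    """Keep only trials whose percentage of missing (blink) samples is at or
    below `threshold_pct`. `trials` is (n_trials, n_samples) with NaN marking
    missing pupil samples; the NaN coding is an assumption of this sketch."""
    missing_pct = 100 * np.mean(np.isnan(trials), axis=1)
    keep = missing_pct <= threshold_pct
    return trials[keep], keep

# The study compared 15%, 30%, and 45% criteria, e.g.:
# for pct in (15, 30, 45):
#     kept_trials, mask = exclude_trials_by_blinks(pupil_trials, pct)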


Subject(s)
Deafness, Speech Perception, Data Analysis, Hearing, Humans, Noise, Pupil
10.
Neuroimage ; 222: 117291, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32835821

ABSTRACT

Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case, cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin with ∼40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.


Subject(s)
Aging/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Speech Perception/physiology, Adolescent, Adult, Aged, Auditory Cortex/physiology, Electroencephalography/methods, Auditory Evoked Potentials/physiology, Female, Humans, Magnetoencephalography/methods, Male, Middle Aged, Speech, Young Adult
11.
Front Neurol ; 11: 613, 2020.
Article in English | MEDLINE | ID: mdl-32719649

ABSTRACT

Service members and veterans (SMVs) with a history of traumatic brain injury (TBI) or blast-related injury often report difficulties understanding speech in complex environments that are not captured by clinical tests of auditory function. Little is currently known about the relative contribution of other auditory, cognitive, and symptomological factors to these communication challenges. This study evaluated the influence of these factors on subjective and objective measures of hearing difficulties in SMVs with and without a history of TBI or blast exposure. Analyses included 212 U.S. SMVs who completed auditory and cognitive batteries and surveys of hearing and other symptoms as part of a larger longitudinal study of TBI. Objective speech recognition performance was predicted by TBI status, while subjective hearing complaints were predicted by blast exposure. Bothersome tinnitus was associated with a history of more severe TBI. Speech recognition performance deficits and tinnitus complaints were also associated with poorer cognitive function. Hearing complaints were predicted by high frequency hearing loss and reports of more severe PTSD symptoms. These results suggest that SMVs with a history of blast exposure and/or TBI experience communication deficits that go beyond what would be expected based on standard audiometric assessments of their injuries.

12.
Neuroimage ; 191: 116-126, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30731247

ABSTRACT

Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio, and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
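The decoding analyses described above amount to cross-validated classification of trial labels from voxel response patterns. A minimal scikit-learn sketch follows; the classifier, cross-validation scheme, and variable names are generic assumptions rather than the authors' pipeline.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def decode_accuracy(voxel_patterns, labels, n_folds=5):
    """Cross-validated decoding accuracy from fMRI response patterns.
    `voxel_patterns` is (n_trials, n_voxels); `labels` codes, for example,
    the four speakers or the speech-vs-instrument category. Illustrative only."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = cross_val_score(clf, voxel_patterns, labels, cv=n_folds)
    return scores.mean()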


Subject(s)
Auditory Perception/physiology, Brain/physiology, Music, Acoustic Stimulation, Adult, Female, Humans, Male, Young Adult
13.
Trends Hear ; 22: 2331216518800869, 2018.
Article in English | MEDLINE | ID: mdl-30261825

ABSTRACT

Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are (considering) applying pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics spanning from experiment logistics and timing to data cleaning and what parameters to analyze. This article contains the basic information and considerations needed to plan, set up, and interpret a pupillometry experiment, as well as commentary about how to interpret the response. Included are practicalities like minimal system requirements for recording a pupil response and specifications for peripheral equipment, experiment logistics and constraints, and different kinds of data processing. Additional details include participant inclusion and exclusion criteria and some methodological considerations that might not be necessary in other auditory experiments. We discuss what data should be recorded and how to monitor the data quality during recording in order to minimize artifacts. Data processing and analysis are considered as well. Finally, we share insights from the collective experience of the authors and discuss some of the challenges that still lie ahead.
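As a companion to the data-cleaning and parameter-choice topics the article covers, here is a minimal sketch of a common processing chain for a single trial: interpolate over blinks, baseline-correct, and extract mean dilation, peak dilation, and peak latency. The baseline window and metric definitions are generic illustrations, not the article's recommendations.

import numpy as np

def preprocess_trial(pupil, fs, baseline_s=1.0):
    """Interpolate over missing (blink) samples, subtract the pre-stimulus
    baseline, and return mean dilation, peak dilation, and peak latency.
    Baseline length and metric definitions are generic illustrations."""
    pupil = np.asarray(pupil, dtype=float)
    idx = np.arange(len(pupil))
    good = ~np.isnan(pupil)
    pupil = np.interp(idx, idx[good], pupil[good])     # linear blink interpolation
    n_base = int(baseline_s * fs)
    corrected = pupil - pupil[:n_base].mean()          # baseline correction
    analysis = corrected[n_base:]                      # post-baseline window
    peak_idx = int(np.argmax(analysis))
    return {
        "mean_dilation": float(analysis.mean()),
        "peak_dilation": float(analysis[peak_idx]),
        "peak_latency_s": peak_idx / fs,
    }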


Subject(s)
Attention/physiology, Dilatation/methods, Hearing/physiology, Practice Guidelines as Topic, Pupil/physiology, Speech Perception/physiology, Pure-Tone Audiometry/methods, Female, Humans, Male, Ophthalmology/methods, Reaction Time, Sensitivity and Specificity
14.
J Exp Child Psychol ; 161: 95-112, 2017 09.
Article in English | MEDLINE | ID: mdl-28505505

ABSTRACT

Stress and fatigue from effortful listening may compromise well-being, learning, and academic achievement in school-aged children. The aim of this study was to investigate the effect of a signal-to-noise ratio (SNR) typical of those in school classrooms on listening effort (behavioral and pupillometric) and listening-related fatigue (self-report and pupillometric) in a group of school-aged children. A sample of 41 normal-hearing children aged 8-11 years performed a narrative speech-picture verification task in a condition with recommended levels of background noise ("ideal": +15 dB SNR) and a condition with typical classroom background noise levels ("typical": -2 dB SNR). Participants showed increased task-evoked pupil dilation in the typical listening condition compared with the ideal listening condition, consistent with an increase in listening effort. No differences were found between listening conditions in terms of performance accuracy and response time on the behavioral task. Similarly, no differences were found between listening conditions in self-report and pupillometric markers of listening-related fatigue. This is the first study to (a) examine listening-related fatigue in children using pupillometry and (b) demonstrate physiological evidence consistent with increased listening effort while listening to spoken narratives despite ceiling-level task performance accuracy. Understanding the physiological mechanisms that underpin listening-related effort and fatigue could inform intervention strategies and ultimately mitigate listening difficulties in children.


Subject(s)
Auditory Perception/physiology, Fatigue/physiopathology, Pupil/physiology, Speech Perception/physiology, Child, Female, Humans, Male, Noise, Reaction Time
15.
Psychophysiology ; 54(2): 193-203, 2017 02.
Article in English | MEDLINE | ID: mdl-27731503

ABSTRACT

Hearing loss is associated with anecdotal reports of fatigue during periods of sustained listening. However, few studies have attempted to measure changes in arousal, as a potential marker of fatigue, over the course of a sustained listening task. The present study aimed to examine subjective, behavioral, and physiological indices of listening-related fatigue. Twenty-four normal-hearing young adults performed a speech-picture verification task in different signal-to-noise ratios (SNRs) while their pupil size was monitored and response times recorded. Growth curve analysis revealed a significantly steeper linear decrease in pupil size in the more challenging SNR, but only in the second half of the trial block. Changes in pupil dynamics over the course of the more challenging listening condition block suggest a reduction in physiological arousal. Behavioral and self-report measures did not reveal any differences between listening conditions. This is the first study to show reduced physiological arousal during a sustained listening task, with changes over time consistent with the onset of fatigue.
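Growth curve analysis of pupil data, as used above, is typically a mixed-effects regression of pupil size on time terms with by-participant random effects. A minimal statsmodels sketch follows, assuming a long-format table with hypothetical columns pupil, time, snr, and subject; the formula and data layout are illustrative, not the study's exact model.

import pandas as pd
import statsmodels.formula.api as smf

def fit_growth_curve(df):
    """Mixed-effects growth curve model: pupil size as a function of a linear
    time term, SNR condition, and their interaction, with random intercepts
    and time slopes by subject. Column names and formula are illustrative."""
    model = smf.mixedlm("pupil ~ time * snr", data=df,
                        groups=df["subject"], re_formula="~time")
    return model.fit()

A steeper negative time slope in the harder SNR, as described in the abstract, would show up as a time-by-snr interaction term in the fitted parameters.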


Subject(s)
Arousal, Auditory Perception/physiology, Fatigue/physiopathology, Pupil/physiology, Acoustic Stimulation, Adolescent, Adult, Humans, Reaction Time, Signal-to-Noise Ratio, Young Adult
16.
Exp Aging Res ; 42(1): 67-82, 2016.
Article in English | MEDLINE | ID: mdl-26683042

ABSTRACT

BACKGROUND/STUDY CONTEXT: Adaptive control, reflected by elevated activity in cingulo-opercular brain regions, optimizes performance in challenging tasks by monitoring outcomes and adjusting behavior. For example, cingulo-opercular function benefits trial-level word recognition in noise for normal-hearing adults. Because auditory system deficits may limit the communicative benefit from adaptive control, we examined the extent to which cingulo-opercular engagement supports word recognition in noise for older adults with hearing loss (HL). METHODS: Participants were selected to form groups with Less HL (n = 12; mean pure tone threshold, pure tone average [PTA] = 19.2 ± 4.8 dB HL [hearing level]) and More HL (n = 12; PTA = 38.4 ± 4.5 dB HL, 0.25-8 kHz, both ears). A word recognition task was performed with words presented in multitalker babble at +3 or +10 dB signal-to-noise ratios (SNRs) during a sparse acquisition fMRI experiment. The participants were middle-aged and older (ages: 64.1 ± 8.4 years) English speakers with no history of neurological or psychiatric diagnoses. RESULTS: Elevated cingulo-opercular activity occurred with increased likelihood of correct word recognition on the next trial (t(23) = 3.28, p = .003), and this association did not differ between hearing loss groups. During trials with word recognition errors, the More HL group exhibited higher blood oxygen level-dependent (BOLD) contrast in occipital and parietal regions compared with the Less HL group. Across listeners, more pronounced cingulo-opercular activity during recognition errors was associated with better overall word recognition performance. CONCLUSION: The trial-level word recognition benefit from cingulo-opercular activity was equivalent for both hearing loss groups. When speech audibility and performance levels are similar for older adults with mild to moderate hearing loss, cingulo-opercular adaptive control contributes to word recognition in noise.


Subject(s)
Cerebral Cortex/physiopathology, Hearing Loss/physiopathology, Noise, Speech Perception, Aged, Female, Humans, Male, Middle Aged, Severity of Illness Index
17.
Exp Aging Res ; 42(1): 50-66, 2016.
Article in English | MEDLINE | ID: mdl-26683041

ABSTRACT

BACKGROUND/STUDY CONTEXT: Vigilance refers to the ability to sustain and adapt attentional focus in response to changing task demands. For older adults with hearing loss, vigilant listening may be particularly effortful and variable across individuals. This study examined the extent to which neural responses to sudden, unexpected changes in task structure (e.g., from rest to word recognition epochs) were related to pupillometry measures of listening effort. METHODS: Individual differences in the task-evoked pupil response during word recognition were used to predict functional magnetic resonance imaging (fMRI) estimates of neural responses to salient transitions between quiet rest, noisy rest, and word recognition in unintelligible, fluctuating background noise. Participants included 29 older adults (M = 70.2 years old) with hearing loss (pure tone average across all frequencies = 36.1 dB HL [hearing level], SD = 6.7). RESULTS: Individuals with a greater average pupil response exhibited a more vigilant pattern of responding on a standardized continuous performance test (response time variability across varying interstimulus intervals; r(27) = .38, p = .04). Across participants there was widespread engagement of attention- and sensory-related cortices in response to transitions between blocks of rest and word recognition conditions. Individuals who exhibited larger task-evoked pupil dilation also showed even greater activity in the right primary auditory cortex in response to changes in task structure. CONCLUSION: Pupillometric estimates of word recognition effort predicted variation in activity within cortical regions that were responsive to salient changes in the environment for older adults with hearing loss. The results of the current study suggest that vigilant attention is increased amongst older adults who exert greater listening effort.


Subject(s)
Hearing Loss/physiopathology, Noise, Speech Perception, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Task Performance and Analysis
18.
J Neurosci ; 35(9): 3929-37, 2015 Mar 04.
Article in English | MEDLINE | ID: mdl-25740521

ABSTRACT

Speech recognition in noise can be challenging for older adults and elicits elevated activity throughout a cingulo-opercular network that is hypothesized to monitor and modify behaviors to optimize performance. A word recognition in noise experiment was used to test the hypothesis that cingulo-opercular engagement provides performance benefit for older adults. Healthy older adults (N = 31; 50-81 years of age; mean pure tone thresholds <32 dB HL from 0.25 to 8 kHz, best ear; species: human) performed word recognition in multitalker babble at 2 signal-to-noise ratios (SNR = +3 or +10 dB) during a sparse sampling fMRI experiment. Elevated cingulo-opercular activity was associated with an increased likelihood of correct recognition on the following trial independently of SNR and performance on the preceding trial. The cingulo-opercular effect increased for participants with the best overall performance. These effects were lower for older adults compared with a younger, normal-hearing adult sample (N = 18). Visual cortex activity also predicted trial-level recognition for the older adults, which resulted from discrete decreases in activity before errors and occurred for the oldest adults with the poorest recognition. Participants demonstrating larger visual cortex effects also had reduced fractional anisotropy in an anterior portion of the left inferior frontal-occipital fasciculus, which projects between frontal and occipital regions where activity predicted word recognition. Together, the results indicate that older adults experience performance benefit from elevated cingulo-opercular activity, but not to the same extent as younger adults, and that declines in attentional control can limit word recognition.


Subject(s)
Cerebral Cortex/physiology, Noise, Speech Perception/physiology, Aged, Aged, 80 and over, Aging/psychology, Diffusion Tensor Imaging, Female, Humans, Individuality, Magnetic Resonance Imaging, Male, Middle Aged, Recognition (Psychology), Signal-to-Noise Ratio
19.
Psychophysiology ; 51(10): 1046-57, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24909603

ABSTRACT

The current pupillometry study examined the impact of speech-perception training on word recognition and cognitive effort in older adults with hearing loss. Trainees identified more words at the follow-up than at the baseline session. Training also resulted in an overall larger and faster peaking pupillary response, even when controlling for performance and reaction time. Perceptual and cognitive capacities affected the peak amplitude of the pupil response across participants but did not diminish the impact of training on the other pupil metrics. Thus, we demonstrated that pupillometry can be used to characterize training-related and individual differences in effort during a challenging listening task. Importantly, the results indicate that speech-perception training not only affects overall word recognition, but also a physiological metric of cognitive effort, which has the potential to be a biomarker of hearing loss intervention outcome.


Subject(s)
Auditory Threshold/physiology, Hearing Loss/physiopathology, Hearing/physiology, Recognition (Psychology)/physiology, Speech Perception/physiology, Aged, Female, Humans, Male, Middle Aged, Pupil/physiology, Reaction Time/physiology, Speech
20.
J Neurosci ; 33(48): 18979-86, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285902

ABSTRACT

Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.


Subject(s)
Nerve Net/physiology, Recognition (Psychology)/physiology, Speech Perception/physiology, Adult, Brain Mapping, Female, Frontal Lobe/physiology, Gyrus Cinguli/physiology, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Neural Pathways/physiology, Oxygen/blood, Psychomotor Performance/physiology, Signal-to-Noise Ratio, Young Adult