Results 1-9 of 9
1.
Int J Audiol ; : 1-10, 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36427054

ABSTRACT

OBJECTIVE: The aim of the current study was to assess the sensitivity, reliability and convergent validity of objective measures of listening effort collected in a sequential dual-task. DESIGN: On each trial, participants viewed a set of digits and listened to a spoken sentence presented at one of a range of signal-to-noise ratios (SNR) and then typed the sentence-final word and recalled the digits. Listening effort measures included word response time, digit recall accuracy and digit response time. In Experiment 1, SNR on each trial was randomised. In Experiment 2, SNR varied in a blocked design, and in each block self-reported listening effort was also collected. STUDY SAMPLES: Separate groups of 40 young adults participated in each experiment. RESULTS: Effects of SNR were observed for all measures. Linear effects of SNR were generally observed even with word recognition accuracy factored out of the models. Among the objective measures, reliability was excellent, and repeated-measures correlations, though not between-subjects correlations, were nearly all significant. CONCLUSION: The objective measures assessed appear to be sensitive and reliable indices of listening effort that are non-redundant with speech intelligibility and have strong within-participants convergent validity. Results support use of these measures in future studies of listening effort.
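
The repeated-measures correlations reported above can be computed by discounting between-subject level differences before correlating two measures within participants. The following is a minimal sketch, not the authors' code, using the pingouin package's rm_corr function; the data file and column names (subject, snr, word_rt, digit_rt) are assumptions for illustration.

    # Minimal sketch: repeated-measures correlation between two
    # listening-effort measures across SNR conditions (hypothetical data).
    import pandas as pd
    import pingouin as pg

    trials = pd.read_csv("dual_task_trials.csv")        # hypothetical file
    # One mean value per participant x SNR condition for each measure
    cond_means = (trials.groupby(["subject", "snr"], as_index=False)
                        [["word_rt", "digit_rt"]].mean())
    # Within-participant association, removing between-subject differences
    print(pg.rm_corr(data=cond_means, x="word_rt", y="digit_rt", subject="subject"))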

2.
Front Neurosci ; 16: 915349, 2022.
Article in English | MEDLINE | ID: mdl-35720726

ABSTRACT

Objectives: Listening effort engages cognitive resources to support speech understanding in adverse listening conditions, and leads to fatigue over the longer term for people with hearing loss. Direct, neural measures of listening-related fatigue have not been developed. Here, event-related or phasic changes in alpha and theta oscillatory power during listening were used as measures of listening effort, and longer-term or tonic changes over the course of the listening task were assessed as measures of listening-related fatigue. In addition, influences of self-reported fatigue and degree of hearing loss on tonic changes in oscillatory power were examined. Design: Participants were middle-aged adults (age 37-65 years; n = 12) with age-appropriate hearing. Sentences were presented in a background of multi-talker babble at a range of signal-to-noise ratios (SNRs) varying around the 80 percent threshold of individual listeners. Single-trial oscillatory power during both sentence and baseline intervals was analyzed with linear mixed-effect models that included as predictors trial number, SNR, subjective fatigue, and hearing loss. Results: Alpha and theta power in both sentence presentation and baseline intervals increased as a function of trial, indicating listening-related fatigue. Further, tonic power increases across trials were affected by hearing loss and/or subjective fatigue, particularly in the alpha-band. Phasic changes in alpha and theta power generally tracked with SNR, with decreased alpha power and increased theta power at less favorable SNRs. However, for the alpha-band, the linear effect of SNR emerged only at later trials. Conclusion: Tonic increases in oscillatory power in alpha- and theta-bands over the course of a listening task may be biomarkers for the development of listening-related fatigue. In addition, alpha-band power as an index of listening-related fatigue may be sensitive to individual differences attributable to level of hearing loss and the subjective experience of listening-related fatigue. Finally, phasic effects of SNR on alpha power emerged only after a period of listening, suggesting that this measure of listening effort could depend on the development of listening-related fatigue.
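
The single-trial analysis described above can be approximated with a linear mixed-effects model fit in Python. The following is a minimal sketch under assumed variable names (alpha_power, trial, snr, fatigue, hearing_loss, subject) and a random intercept per participant; it is illustrative rather than the authors' analysis script.

    # Minimal sketch: mixed-effects model of single-trial alpha power
    # with trial, SNR, subjective fatigue and hearing loss as predictors.
    import pandas as pd
    import statsmodels.formula.api as smf

    power = pd.read_csv("single_trial_power.csv")       # hypothetical file
    model = smf.mixedlm("alpha_power ~ trial + snr + fatigue + hearing_loss",
                        data=power, groups=power["subject"])
    print(model.fit().summary())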

3.
Ear Hear ; 43(4): 1164-1177, 2022.
Article in English | MEDLINE | ID: mdl-34983897

ABSTRACT

OBJECTIVES: Listening effort is needed to understand speech that is degraded by hearing loss, a noisy environment, or both. This in turn reduces cognitive spare capacity, the amount of cognitive resources available for allocation to concurrent tasks. Predictive sentence context enables older listeners to perceive speech more accurately, but how does contextual information affect older adults' listening effort? The current study examines the impacts of sentence context and cognitive (memory) load on sequential dual-task behavioral performance in older adults. To assess whether effects of context and memory load differ as a function of older listeners' hearing status, baseline working memory capacity, or both, effects were compared across separate groups of participants with and without hearing loss and with high and low working memory capacity. DESIGN: Participants were older adults (age 60-84 years; n = 63) who passed a screen for cognitive impairment. A median split classified participants into groups with high and low working memory capacity. On each trial, participants listened to spoken sentences in noise and reported sentence-final words that were either predictable or unpredictable based on sentence context, and also recalled short (low-load) or long (high-load) sequences of digits that were presented visually before each spoken sentence. Speech intelligibility was quantified as word identification accuracy, and measures of listening effort included digit recall accuracy, and response time to words and digits. Correlations of context benefit in each dependent measure with working memory and vocabulary were also examined. RESULTS: Across all participant groups, accuracy and response time for both word identification and digit recall were facilitated by predictive context, indicating that in addition to an improvement in intelligibility, listening effort was also reduced when sentence-final words were predictable. Effects of predictability on all listening effort measures were observed whether or not trials with an incorrect word identification response were excluded, indicating that the effects of predictability on listening effort did not depend on speech intelligibility. In addition, although cognitive load did not affect word identification accuracy, response time for word identification and digit recall, as well as accuracy for digit recall, were impaired under the high-load condition, indicating that cognitive load reduced the amount of cognitive resources available for speech processing. Context benefit in speech intelligibility was positively correlated with vocabulary. However, context benefit was not related to working memory capacity. CONCLUSIONS: Predictive sentence context reduces listening effort in cognitively healthy older adults resulting in greater cognitive spare capacity available for other mental tasks, irrespective of the presence or absence of hearing loss and baseline working memory capacity.
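
A minimal sketch of the median-split grouping described above follows; the data file and the column name wm_span are assumptions, not taken from the study.

    # Minimal sketch: classify participants into high/low working memory
    # capacity groups by a median split on a span score.
    import pandas as pd

    ppts = pd.read_csv("participants.csv")               # hypothetical file
    median_span = ppts["wm_span"].median()
    ppts["wm_group"] = (ppts["wm_span"] > median_span).map({True: "high", False: "low"})
    print(ppts["wm_group"].value_counts())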


Subjects
Deafness; Hearing Loss; Speech Perception; Aged; Aged, 80 and over; Humans; Listening Effort; Memory, Short-Term/physiology; Middle Aged; Speech Intelligibility; Speech Perception/physiology
4.
Trends Hear ; 25: 23312165211018092, 2021.
Article in English | MEDLINE | ID: mdl-34674579

ABSTRACT

A sequential dual-task design was used to assess the impacts of spoken sentence context and cognitive load on listening effort. Young adults with normal hearing listened to sentences masked by multitalker babble in which sentence-final words were either predictable or unpredictable. Each trial began with visual presentation of a short (low-load) or long (high-load) sequence of to-be-remembered digits. Words were identified more quickly and accurately in predictable than unpredictable sentence contexts. In addition, digits were recalled more quickly and accurately on trials on which the sentence was predictable, indicating reduced listening effort for predictable compared to unpredictable sentences. For word and digit recall response time but not for digit recall accuracy, the effect of predictability remained significant after exclusion of trials with incorrect word responses and was thus independent of speech intelligibility. In addition, under high cognitive load, words were identified more slowly and digits were recalled more slowly and less accurately than under low load. Participants' working memory and vocabulary were not correlated with the sentence context benefit in either word recognition or digit recall. Results indicate that listening effort is reduced when sentences are predictable and that cognitive load affects the processing of spoken words in sentence contexts.
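
The check that the predictability effect survives exclusion of incorrect-word trials can be sketched as below; this is an illustration with invented column names (word_correct, predictable, digit_rt), not the authors' analysis, which may have used mixed models rather than a paired t test.

    # Minimal sketch: predictability effect on digit-recall RT after
    # excluding trials with an incorrect word response.
    import pandas as pd
    from scipy import stats

    trials = pd.read_csv("dual_task_trials.csv")         # hypothetical file
    correct = trials[trials["word_correct"] == 1]         # keep correct-word trials only
    means = (correct.groupby(["subject", "predictable"])["digit_rt"]
                    .mean().unstack())                     # columns: False, True
    t, p = stats.ttest_rel(means[True], means[False])
    print(f"paired t = {t:.2f}, p = {p:.4f}")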


Subjects
Listening Effort; Speech Perception; Cognition/physiology; Humans; Reaction Time; Speech Perception/physiology; Young Adult
5.
Ear Hear ; 41(5): 1144-1157, 2020.
Article in English | MEDLINE | ID: mdl-32282402

ABSTRACT

OBJECTIVES: Listening to speech in adverse listening conditions is effortful. Objective assessment of cognitive spare capacity during listening can serve as an index of the effort needed to understand speech. Cognitive spare capacity is influenced both by signal-driven demands posed by listening conditions and top-down demands intrinsic to spoken language processing, such as memory use and semantic processing. Previous research indicates that electrophysiological responses, particularly alpha oscillatory power, may index listening effort. However, it is not known how these indices respond to memory and semantic processing demands during spoken language processing in adverse listening conditions. The aim of the present study was twofold: first, to assess the impact of memory demands on electrophysiological responses during recognition of degraded, spoken sentences, and second, to examine whether predictable sentence contexts increase or decrease cognitive spare capacity during listening. DESIGN: Cognitive demand was varied in a memory load task in which young adult participants (n = 20) viewed either low-load (one digit) or high-load (seven digits) sequences of digits, then listened to noise-vocoded spoken sentences that were either predictable or unpredictable, and then reported the final word of the sentence and the digits. Alpha oscillations in the frequency domain and event-related potentials in the time domain of the electrophysiological data were analyzed, as was behavioral accuracy for both words and digits. RESULTS: Measured during sentence processing, event-related desynchronization of alpha power was greater (more negative) under high load than low load and was also greater for unpredictable than predictable sentences. A complementary pattern was observed for the P300/late positive complex (LPC) to sentence-final words, such that P300/LPC amplitude was reduced under high load compared with low load and for unpredictable compared with predictable sentences. Both words and digits were identified more quickly and accurately on trials in which spoken sentences were predictable. CONCLUSIONS: Results indicate that during a sentence-recognition task, both cognitive load and sentence predictability modulate electrophysiological indices of cognitive spare capacity, namely alpha oscillatory power and P300/LPC amplitude. Both electrophysiological and behavioral results indicate that a predictive sentence context reduces cognitive demands during listening. Findings contribute to a growing literature on objective measures of cognitive demand during listening and indicate predictable sentence context as a top-down factor that can support ease of listening.
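
Event-related desynchronization is conventionally expressed as the percent change in band power relative to a baseline interval, with negative values indicating desynchronization. A minimal sketch with made-up power values:

    # Minimal sketch: alpha power during the sentence interval as percent
    # change from the pre-sentence baseline (negative = desynchronization).
    import numpy as np

    alpha_sentence = np.array([3.1, 4.2, 3.8])   # single-trial power, sentence interval
    alpha_baseline = np.array([4.4, 5.0, 4.1])   # single-trial power, baseline interval
    erd_percent = (alpha_sentence - alpha_baseline) / alpha_baseline * 100.0
    print(erd_percent)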


Subjects
Speech Perception; Cognition; Electroencephalography; Humans; Language; Noise
6.
Ear Hear ; 39(2): 378-389, 2018.
Article in English | MEDLINE | ID: mdl-28945658

ABSTRACT

OBJECTIVES: Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. DESIGN: One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. RESULTS: In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. CONCLUSIONS: Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
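
Noise vocoding, used above to vary spectral degradation, divides the signal into frequency channels, extracts each channel's temporal envelope, and uses it to modulate band-limited noise before summing the channels; fewer channels yield more severe degradation. The sketch below is illustrative only; the filter order, channel spacing, and envelope extraction are assumptions, not the study's stimulus-processing parameters.

    # Minimal sketch of N-channel noise vocoding (illustrative settings).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=None):
        f_hi = f_hi or min(8000.0, 0.45 * fs)              # stay below Nyquist
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced channel edges
        rng = np.random.default_rng(0)
        out = np.zeros_like(signal, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))               # channel envelope
            noise = sosfiltfilt(sos, rng.standard_normal(len(signal)))
            out += envelope * noise                        # envelope-modulated noise
        return out / np.max(np.abs(out))                   # rough peak normalization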


Subjects
Cognition; Speech Perception; Healthy Volunteers; Hearing/physiology; Humans; Language; Noise; Young Adult
7.
J Speech Lang Hear Res ; 60(8): 2321-2336, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28724130

ABSTRACT

Purpose: We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method: Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Results: Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Conclusion: Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. Supplemental materials: https://doi.org/10.23641/asha.5216200.
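
The hierarchical regressions described above can be sketched as a two-step model comparison in which demographic and hearing covariates are entered first and an early-skill predictor second, with the increase in R-squared taken as its unique contribution. Variable names and the data file below are assumptions for illustration.

    # Minimal sketch: incremental R-squared of an early-skill predictor.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ci_outcomes.csv")                    # hypothetical file
    step1 = smf.ols("language_outcome ~ age_at_implant + unaided_pta", data=df).fit()
    step2 = smf.ols("language_outcome ~ age_at_implant + unaided_pta"
                    " + early_language_6mo", data=df).fit()
    print(f"delta R2 = {step2.rsquared - step1.rsquared:.3f}")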


Subjects
Child Development; Cochlear Implants; Deafness/psychology; Deafness/rehabilitation; Language; Speech Perception; Adolescent; Adult; Child; Child, Preschool; Deafness/diagnosis; Deafness/surgery; Female; Humans; Infant; Male; Prognosis; Young Adult
8.
Neuropsychologia ; 91: 451-464, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27616158

ABSTRACT

Adult aging is associated with decreased accuracy for recognizing speech, particularly in noisy backgrounds and for high neighborhood density words, which sound similar to many other words. In the current study, the time course of neighborhood density effects in young and older adults was compared using event-related potentials (ERPs) and behavioral responses in a lexical decision task for spoken words and nonwords presented either in quiet or in noise. Target items sounded similar either to many or to few other words (neighborhood density) but were balanced for the frequency of their component sounds (phonotactic probability). Behavioral effects of density were similar across age groups, but the event-related potential effects of density differed as a function of age group. For young adults, density modulated the amplitude of both the N400 and the later P300 or late positive complex (LPC). For older adults, density modulated only the amplitude of the P300/LPC. Thus, spreading activation to the semantics of lexical neighbors, indexed by the N400 density effect, appears to be reduced or delayed in adult aging. In contrast, effects of density on P300/LPC amplitude were present in both age groups, perhaps reflecting attentional allocation to items that resemble few words in the mental lexicon. The results constitute the first evidence that ERP effects of neighborhood density are affected by adult aging. The age difference may reflect either a unitary density effect that is delayed by approximately 150 ms in older adults, or multiple processes that are differentially affected by aging.
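
ERP component amplitudes of the kind analyzed above are commonly quantified as the mean voltage in a fixed latency window. The sketch below illustrates this with placeholder data and an N400-like 300-500 ms window; the sampling rate and window bounds are assumptions, not the study's parameters.

    # Minimal sketch: mean amplitude in a fixed window from an
    # epochs (trials x samples) array, one value per trial.
    import numpy as np

    fs = 250                                      # Hz (assumed)
    times = np.arange(-0.2, 0.8, 1 / fs)          # epoch: -200 to 800 ms
    epochs = np.random.default_rng(0).normal(size=(40, times.size))  # placeholder data
    in_window = (times >= 0.300) & (times <= 0.500)
    mean_amplitude = epochs[:, in_window].mean(axis=1)
    print(mean_amplitude.mean())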


Subjects
Aging/physiology; Evoked Potentials/physiology; Phonetics; Semantics; Speech Perception/physiology; Vocabulary; Age Factors; Aged; Analysis of Variance; Decision Making/physiology; Electroencephalography; Female; Humans; Male; Reaction Time/physiology; Recognition, Psychology/physiology; Time Factors; Young Adult
9.
Brain Lang ; 127(3): 463-74, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24129200

ABSTRACT

All current models of spoken word recognition propose that sound-based representations of spoken words compete with, or inhibit, one another during recognition. In addition, certain models propose that higher probability sublexical units facilitate recognition under certain circumstances. Two experiments were conducted examining ERPs to spoken words and nonwords simultaneously varying in phonotactic probability and neighborhood density. Results showed that the amplitude of the P2 potential was greater for high probability-density words and nonwords, suggesting an early inhibitory effect of neighborhood density. In order to closely examine the role of phonotactic probability, effects of initial phoneme frequency were also examined. The latency of the P2 potential was shorter for words with high initial-consonant probability, suggesting a facilitative effect of phonotactic probability. The current results are consistent with findings from previous studies using reaction time and eye-tracking paradigms and provide new insights into the time-course of lexical and sublexical activation and competition.
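
Peak latency measures of the kind reported for the P2 are typically taken as the time of the extremum of the averaged waveform within a component window. A minimal sketch with placeholder data and an assumed 150-275 ms window:

    # Minimal sketch: P2-like peak latency from an averaged waveform.
    import numpy as np

    fs = 250
    times = np.arange(-0.1, 0.6, 1 / fs)
    erp = np.random.default_rng(1).normal(size=times.size)   # placeholder average ERP
    in_window = (times >= 0.150) & (times <= 0.275)
    peak_latency = times[in_window][np.argmax(erp[in_window])]
    print(f"P2 peak latency: {peak_latency * 1000:.0f} ms")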


Subjects
Brain/physiology; Evoked Potentials/physiology; Phonetics; Speech Perception/physiology; Electroencephalography; Female; Humans; Male; Reaction Time/physiology; Signal Processing, Computer-Assisted