1.
J Speech Lang Hear Res ; : 1-11, 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37494929

ABSTRACT

PURPOSE: In a previous publication, we observed that maximum speech performance in a nonclinical sample of young adult speakers producing alternating diadochokinesis (DDK) sequences (e.g., rapidly repeating "pataka") was associated with cognitive control: Those with better cognitive switching abilities (i.e., switching flexibly between tasks or mental sets) showed higher DDK accuracy. To follow up on these results, we investigated whether this previously observed association is specific to the rapid production of alternating sequences or also holds for non-alternating sequences (e.g., "tatata"). METHOD: For the same sample of 78 young adults as in our previous study, we additionally analyzed their accuracy and rate performance on non-alternating sequences to investigate whether executive control abilities (i.e., indices of speakers' updating, inhibition, and switching abilities) were more strongly associated with production of alternating, as compared with non-alternating, sequences. RESULTS: Of the three executive control abilities, only switching predicted both DDK accuracy and rate. The association between cognitive switching (and updating ability) and DDK accuracy was only observed for alternating sequences. The DDK rate model included a simple effect of cognitive switching, such that those with better switching ability showed slower diadochokinetic rates across the board. Thus, those with better cognitive ability showed more accurate (alternating) diadochokinetic production and slower maximum rates for both alternating and non-alternating sequences. CONCLUSION: These combined results suggest that those with better executive control have better control over their maximum speech performance and show that the link between cognitive control and maximum speech performance also holds for non-alternating sequences.

2.
J Acoust Soc Am ; 153(4): 2165, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37092911

ABSTRACT

Individual speakers are often able to modify their speech to facilitate communication in challenging conditions, such as speaking in a noisy environment. Such vocal "enrichments" might include reductions in speech rate or increases in acoustic contrasts. However, it is unclear how consistently speakers enrich their speech over time. This study examined inter-speaker variability in the speech enrichment modifications applied by speakers. The study compared a baseline habitual speaking style to a clear-Lombard style and measured changes in acoustic differences between the two styles over sentence trials. Seventy-eight young adult participants read out sentences in the habitual and clear-Lombard speaking styles. Acoustic differences between speaking styles generally increased nonlinearly over trials, suggesting that speakers require practice before realizing their full speech enrichment potential when speaking clearly in noise with reduced auditory feedback. Using a recent objective intelligibility metric based on glimpses, the study also found that predicted intelligibility increased over trials, highlighting that communicative benefits of the clear-Lombard style are not static. These findings underline the dynamic nature of speaking styles.
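
The glimpse-based metric referred to above builds on the idea that predicted intelligibility scales with the proportion of spectro-temporal regions in which the speech level exceeds the noise level by some margin (commonly around 3 dB). The Python sketch below illustrates that idea in simplified form only: it uses a plain STFT rather than an auditory filterbank, is not the exact metric used in the study, and all parameter values are illustrative assumptions.

    # Simplified glimpse-proportion sketch: the fraction of time-frequency cells
    # in which speech exceeds noise by a margin (assumed 3 dB). A plain STFT is
    # used as a stand-in for the auditory excitation patterns of published metrics.
    import numpy as np
    from scipy.signal import stft

    def glimpse_proportion(speech, noise, fs, threshold_db=3.0):
        _, _, S = stft(speech, fs=fs, nperseg=512)
        _, _, N = stft(noise, fs=fs, nperseg=512)
        eps = 1e-12
        local_snr_db = 20.0 * np.log10((np.abs(S) + eps) / (np.abs(N) + eps))
        return float(np.mean(local_snr_db > threshold_db))

    # Toy usage with synthetic placeholder signals (1 s at 16 kHz)
    fs = 16000
    rng = np.random.default_rng(0)
    speech = rng.normal(size=fs)       # placeholder for a clean speech waveform
    noise = 0.5 * rng.normal(size=fs)  # placeholder for the masking noise
    print(glimpse_proportion(speech, noise, fs))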


Subjects
Speech Perception, Voice, Young Adult, Humans, Speech, Noise, Acoustics, Communication, Speech Intelligibility, Speech Acoustics
3.
JASA Express Lett ; 3(3): 035201, 2023 03.
Article in English | MEDLINE | ID: mdl-37003708

ABSTRACT

The current study examined the relation between speaking-style categorization and speech recognition in post-lingually deafened adult cochlear implant users and normal-hearing listeners tested under 4- and 8-channel acoustic noise-vocoder cochlear implant simulations. Across all listeners, better speaking-style categorization of careful read and casual conversation speech was associated with more accurate recognition of speech across those same two speaking styles. Findings suggest that some cochlear implant users and normal-hearing listeners under cochlear implant simulation may benefit from stronger encoding of indexical information in speech, enabling both better categorization and recognition of speech produced in different speaking styles.
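
For readers unfamiliar with acoustic noise-vocoder CI simulations such as the 4- and 8-channel conditions above, the sketch below shows the general principle: the signal is split into a small number of frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise carriers. This is a minimal illustration under assumed parameter values (filter order, band edges, Hilbert envelopes), not the vocoder actually used in these studies.

    # Minimal n-channel noise vocoder sketch (illustrative parameters only).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        rng = np.random.default_rng(1)
        out = np.zeros_like(x, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, x)                           # analysis band
            envelope = np.abs(hilbert(band))                     # amplitude envelope
            carrier = sosfiltfilt(sos, rng.normal(size=len(x)))  # band-limited noise
            out += envelope * carrier                            # envelope-modulated noise
        return out / (np.max(np.abs(out)) + 1e-12)               # simple peak normalization

    # Toy usage: vocode a 1 s placeholder signal with 4 channels
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)    # placeholder for a speech waveform
    vocoded = noise_vocode(x, fs, n_channels=4)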


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Speech, Noise
5.
PLoS One ; 16(12): e0260952, 2021.
Article in English | MEDLINE | ID: mdl-34965252

ABSTRACT

The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30-39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community, enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods/access. Trial registration: https://www.trialregister.nl/trial/7955.


Subjects
Brain/physiology, Social Environment, Adult, Affect/physiology, Behavior, Brain/diagnostic imaging, COVID-19/diagnosis, Cognition/physiology, Female, Humans, Male, Neuroimaging, Sensation/physiology, Surveys and Questionnaires
6.
J Speech Lang Hear Res ; 63(11): 3611-3627, 2020 11 13.
Article in English | MEDLINE | ID: mdl-33079614

ABSTRACT

Purpose This study investigated whether maximum speech performance, more specifically, the ability to rapidly alternate between similar syllables during speech production, is associated with executive control abilities in a nonclinical young adult population. Method Seventy-eight young adult participants completed two speech tasks, both operationalized as maximum performance tasks, to index their articulatory control: a diadochokinetic (DDK) task with nonword and real-word syllable sequences and a tongue-twister task. Additionally, participants completed three cognitive tasks, each covering one element of executive control (a Flanker interference task to index inhibitory control, a letter-number switching task to index cognitive switching, and an operation span task to index updating of working memory). Linear mixed-effects models were fitted to investigate how well maximum speech performance measures can be predicted by elements of executive control. Results Participants' cognitive switching ability was associated with their accuracy in both the DDK and tongue-twister speech tasks. Additionally, nonword DDK accuracy was more strongly associated with executive control than real-word DDK accuracy (which has to be interpreted with caution). None of the executive control abilities related to the maximum rates at which participants performed the two speech tasks. Conclusion These results underscore the association between maximum speech performance and executive control (cognitive switching in particular).
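
The abstract above states that linear mixed-effects models were fitted to predict maximum speech performance from executive-control indices. The snippet below is only a hypothetical sketch of such a model in Python (statsmodels), with made-up variable names, simulated data, and a random intercept per participant; it does not reproduce the authors' analysis.

    # Hypothetical mixed-effects model: accuracy ~ switching * sequence_type,
    # with a random intercept per participant (simulated toy data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_participants, n_trials = 30, 10
    participant = np.repeat([f"p{i}" for i in range(n_participants)], n_trials)
    switching = np.repeat(rng.normal(size=n_participants), n_trials)  # per-person z-score
    sequence_type = np.tile(["nonword", "real_word"], n_participants * n_trials // 2)
    person_offset = np.repeat(rng.normal(scale=0.03, size=n_participants), n_trials)
    accuracy = (0.80
                + 0.05 * switching
                - 0.05 * (sequence_type == "nonword")
                + person_offset
                + rng.normal(scale=0.05, size=n_participants * n_trials))

    df = pd.DataFrame({"accuracy": accuracy, "switching": switching,
                       "sequence_type": sequence_type, "participant": participant})

    model = smf.mixedlm("accuracy ~ switching * sequence_type",
                        data=df, groups=df["participant"])
    print(model.fit().summary())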


Subjects
Speech Perception, Speech, Executive Function, Humans, Short-Term Memory, Speech Production Measurement, Young Adult
7.
J Speech Lang Hear Res ; 63(9): 2833-2845, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32783579

ABSTRACT

Purpose In healthy speakers, the more frequent and probable a word is in its context, the shorter the word tends to be. This study investigated whether these probabilistic effects were similarly sized for speakers with dysarthria of different severities. Method Fifty-six speakers of New Zealand English (42 speakers with dysarthria and 14 healthy speakers) were recorded reading the Grandfather Passage. Measurements of word duration, frequency, and transitional word probability were taken. Results As hypothesized, words with a higher frequency and probability tended to be shorter in duration. There was also a significant interaction between word frequency and speech severity. This indicated that the more severe the dysarthria, the smaller the effects of word frequency on speakers' word durations. Transitional word probability also interacted with speech severity, but did not account for significant unique variance in the full model. Conclusions These results suggest that, as the severity of dysarthria increases, the duration of words is less affected by probabilistic variables. These findings may be due to reductions in the control and execution of muscle movement exhibited by speakers with dysarthria.


Subjects
Dysarthria, Speech, Humans, New Zealand, Probability, Speech Acoustics, Speech Intelligibility, Speech Production Measurement
8.
J Acoust Soc Am ; 147(1): 101, 2020 01.
Article in English | MEDLINE | ID: mdl-32006976

ABSTRACT

The current study examined sentence recognition across speaking styles (conversational, neutral, and clear) in quiet and multi-talker babble (MTB) for cochlear implant (CI) users and normal-hearing listeners under CI simulations. Listeners demonstrated poorer recognition accuracy in MTB than in quiet, but were relatively more accurate with clear speech overall. Within CI users, higher-performing participants were also more accurate in MTB when listening to clear speech. Lower-performing users' accuracy was not impacted by speaking style. Clear speech may facilitate recognition in MTB for high-performing users, who may be better able to take advantage of clear speech cues.


Subjects
Cochlear Implants, Recognition (Psychology), Speech Perception, Speech, Acoustic Stimulation, Adult, Aged, Female, Humans, Male, Middle Aged, Noise, Young Adult
9.
Ear Hear ; 40(1): 63-76, 2019.
Article in English | MEDLINE | ID: mdl-29742545

ABSTRACT

OBJECTIVES: Real-life, adverse listening conditions involve a great deal of speech variability, including variability in speaking style. Depending on the speaking context, talkers may use a more casual, reduced speaking style or a more formal, careful speaking style. Attending to fine-grained acoustic-phonetic details characterizing different speaking styles facilitates the perception of the speaking style used by the talker. These acoustic-phonetic cues are poorly encoded in cochlear implants (CIs), potentially rendering the discrimination of speaking style difficult. As a first step to characterizing CI perception of real-life speech forms, the present study investigated the perception of different speaking styles in normal-hearing (NH) listeners with and without CI simulation. DESIGN: The discrimination of three speaking styles (conversational reduced speech, speech from retold stories, and carefully read speech) was assessed using a speaking style discrimination task in two experiments. NH listeners classified sentence-length utterances, produced in one of the three styles, as either formal (careful) or informal (conversational). Utterances were presented with unmodified speaking rates in experiment 1 (31 NH, young adult Dutch speakers) and with modified speaking rates set to the average rate across all utterances in experiment 2 (28 NH, young adult Dutch speakers). In both experiments, acoustic noise-vocoder simulations of CIs were used to produce 12-channel (CI-12) and 4-channel (CI-4) vocoder simulation conditions, in addition to a no-simulation condition. RESULTS: In both experiments 1 and 2, NH listeners were able to reliably discriminate the speaking styles without CI simulation. However, this ability was reduced under CI simulation. In experiment 1, participants showed poor discrimination of speaking styles under CI simulation. Listeners used speaking rate as a cue to make their judgements, even though it was not a reliable cue to speaking style in the study materials. In experiment 2, without differences in speaking rate among speaking styles, listeners showed better discrimination of speaking styles under CI simulation, using additional cues to complete the task. CONCLUSIONS: The findings from the present study demonstrate that perceiving differences in three speaking styles under CI simulation is a difficult task because some important cues to speaking style are not fully available in these conditions. While some cues like speaking rate are available, this information alone may not always be a reliable indicator of a particular speaking style. Some other reliable speaking style cues, such as degraded acoustic-phonetic information and variability in speaking rate within an utterance, may be available but less salient. However, as in experiment 2, listeners' perception of speaking styles may be modified if they are constrained or trained to use these additional cues, which were more reliable in the context of the present study. Taken together, these results suggest that dealing with speech variability in real-life listening conditions may be a challenge for CI users.


Subjects
Cochlear Implants, Cues (Psychology), Speech Acoustics, Speech Perception/physiology, Adolescent, Adult, Case-Control Studies, Female, Humans, Male, Phonetics, Speech, Speech Discrimination Tests, Young Adult
10.
Lang Speech ; 60(2): 289-317, 2017 06.
Article in English | MEDLINE | ID: mdl-28697699

ABSTRACT

High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three age groups of Dutch speakers (younger children aged 8-12 years, adolescents aged 12-18 years, and older adults aged 62-95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.
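
Transitional probability (TP) as used here can be made concrete with a simple bigram calculation: the forward TP of a word is its probability conditioned on the preceding word, count(w1 w2) / count(w1), and the backward TP conditions on the following word, count(w1 w2) / count(w2). The toy Python sketch below illustrates the computation on a made-up miniature corpus; it is not the corpus or estimation procedure of the study.

    # Toy forward/backward transitional probabilities from a miniature corpus.
    from collections import Counter

    corpus = "the cat sat on the mat the cat slept".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def forward_tp(w1, w2):
        # P(w2 | preceding word w1) = count(w1 w2) / count(w1)
        return bigrams[(w1, w2)] / unigrams[w1]

    def backward_tp(w1, w2):
        # P(w1 | following word w2) = count(w1 w2) / count(w2)
        return bigrams[(w1, w2)] / unigrams[w2]

    print(forward_tp("the", "cat"))   # 2/3: "the" is followed by "cat" in 2 of 3 cases
    print(backward_tp("the", "cat"))  # 2/2: "cat" is preceded by "the" in both cases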


Subjects
Linguistics, Reading, Speech Acoustics, Voice Quality, Adolescent, Age Factors, Aged, Aged 80 and over, Child, Female, Humans, Male, Middle Aged, Probability, Speech Production Measurement, Time Factors
11.
Front Psychol ; 7: 781, 2016.
Article in English | MEDLINE | ID: mdl-27303340

ABSTRACT

This study investigated whether age and/or differences in hearing sensitivity influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech. To that end, this study specifically focused on the relationship between participants' ratings of short affective utterances and the utterances' acoustic parameters (pitch, intensity, and articulation rate) known to be associated with the emotion dimensions arousal and valence. Stimuli consisted of short utterances taken from a corpus of conversational speech. In two rating tasks, younger and older adults either rated arousal or valence using a 5-point scale. Mean intensity was found to be the main cue participants used in the arousal task (i.e., higher mean intensity cueing higher levels of arousal) while mean F0 was the main cue in the valence task (i.e., higher mean F0 being interpreted as more negative). Even though there were no overall age group differences in arousal or valence ratings, compared to younger adults, older adults responded less strongly to mean intensity differences cueing arousal and responded more strongly to differences in mean F0 cueing valence. Individual hearing sensitivity among the older adults did not modify the use of mean intensity as an arousal cue. However, individual hearing sensitivity generally affected valence ratings and modified the use of mean F0. We conclude that age differences in the interpretation of mean F0 as a cue for valence are likely due to age-related hearing loss, whereas age differences in rating arousal do not seem to be driven by hearing sensitivity differences between age groups (as measured by pure-tone audiometry).
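
As an illustration of the kind of acoustic parameters examined here, the sketch below extracts a mean F0 and a rough mean intensity from a wav file with librosa; the parameter settings (pitch floor/ceiling, RMS-based intensity) are assumptions and only approximate the measures typically obtained with phonetic analysis software.

    # Illustrative extraction of mean F0 (Hz) and a rough intensity measure (dB).
    import numpy as np
    import librosa

    def mean_f0_and_intensity(path):
        y, sr = librosa.load(path, sr=None)
        # F0 via probabilistic YIN; unvoiced frames are returned as NaN.
        f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
        mean_f0 = float(np.nanmean(f0))
        # Frame-wise RMS energy in dB as a crude stand-in for intensity.
        rms = librosa.feature.rms(y=y)[0]
        mean_db = float(np.mean(20 * np.log10(rms + 1e-12)))
        return mean_f0, mean_db

    # f0_hz, intensity_db = mean_f0_and_intensity("utterance.wav")  # hypothetical file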

12.
J Acoust Soc Am ; 139(4): 1618, 2016 04.
Article in English | MEDLINE | ID: mdl-27106310

ABSTRACT

This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions mainly obtained with artificially speeded materials may not generalize to speech rate variation as encountered in conversational speech.


Subjects
Aging/psychology, Periodicity, Speech Acoustics, Speech Perception, Adolescent, Adult, Age Factors, Aged, Pure-Tone Audiometry, Auditory Threshold, Cognition, Eye Movements, Female, Humans, Male, Middle Aged, Neuropsychological Tests, Psychoacoustics, Reaction Time, Recognition (Psychology), Speech Intelligibility, Time Factors, Visual Perception, Young Adult
13.
Adv Exp Med Biol ; 894: 47-55, 2016.
Article in English | MEDLINE | ID: mdl-27080645

ABSTRACT

Normal-hearing listeners use acoustic cues in speech to interpret a speaker's emotional state. This study investigates the effect of hearing aids on the perception of the emotion dimensions arousal (aroused/calm) and valence (positive/negative attitude) in older adults with hearing loss. More specifically, we investigate whether wearing a hearing aid improves the correlation between affect ratings and affect-related acoustic parameters. To that end, affect ratings by 23 hearing-aid users were compared for aided and unaided listening. Moreover, these ratings were compared to the ratings by an age-matched group of 22 participants with age-normal hearing. For arousal, hearing-aid users rated utterances as generally more aroused in the aided than in the unaided condition. Intensity differences were the strongest indicator of degree of arousal. Among the hearing-aid users, those with poorer hearing used additional prosodic cues (i.e., tempo and pitch) for their arousal ratings, compared to those with relatively good hearing. For valence, pitch was the only acoustic cue associated with the ratings. Neither listening condition nor hearing loss severity (differences among the hearing-aid users) influenced affect ratings or the use of affect-related acoustic parameters. Compared to the normal-hearing reference group, ratings of hearing-aid users in the aided condition did not generally differ on either emotion dimension. However, hearing-aid users were more sensitive to intensity differences in their arousal ratings than the normal-hearing participants. We conclude that the use of hearing aids is important for the rehabilitation of affect perception and particularly influences the interpretation of arousal.


Subjects
Affect, Arousal, Auditory Perception, Hearing Aids, Aged, Aged 80 and over, Auditory Threshold, Female, Humans, Male
14.
Front Psychol ; 7: 186, 2016.
Article in English | MEDLINE | ID: mdl-26952145

ABSTRACT

The acceptable noise level (ANL) test, in which individuals indicate what level of noise they are willing to put up with while following speech, has been used to guide hearing aid fitting decisions and has been found to relate to prospective hearing aid use. Unlike objective measures of speech perception ability, ANL outcome is not related to individual hearing loss or age, but rather reflects an individual's inherent acceptance of competing noise while listening to speech. As such, the measure may predict aspects of hearing aid success. Crucially, however, recent studies have questioned its repeatability (test-retest reliability). The first question for this study was whether the inconsistent results regarding the repeatability of the ANL test may be due to differences in speech material types used in previous studies. Second, it is unclear whether meaningfulness and semantic coherence of the speech modify ANL outcome. To investigate these questions, we compared ANLs obtained with three types of materials: the International Speech Test Signal (ISTS), which is non-meaningful and semantically non-coherent by definition, passages consisting of concatenated meaningful standard audiology sentences, and longer fragments taken from conversational speech. We included conversational speech as this type of speech material is most representative of everyday listening. Additionally, we investigated whether ANL outcomes, obtained with these three different speech materials, were associated with self-reported limitations due to hearing problems and listening effort in everyday life, as assessed by a questionnaire. ANL data were collected for 57 relatively good-hearing adult participants with an age range representative of hearing aid users. Results showed that meaningfulness, but not semantic coherence, of the speech material affected ANL. Less noise was accepted for the non-meaningful ISTS signal than for the meaningful speech materials. ANL repeatability was comparable across the speech materials. Furthermore, ANL was found to be associated with the outcome of a hearing-related questionnaire. This suggests that ANL may predict activity limitations for listening to speech-in-noise in everyday situations. In conclusion, more natural speech materials can be used in a clinical setting as their repeatability is not reduced compared to more standard materials.
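
For context, the ANL is conventionally obtained as the difference between the listener's most comfortable listening level (MCL) for running speech and the highest background noise level (BNL) they accept while following that speech; lower values indicate greater noise acceptance. The arithmetic is trivial, as in the hypothetical example below (made-up dB values).

    # ANL = most comfortable speech level (MCL) minus highest accepted noise level (BNL).
    def acceptable_noise_level(mcl_db, bnl_db):
        return mcl_db - bnl_db

    # Hypothetical listener: speech most comfortable at 63 dB, noise accepted up to 55 dB.
    print(acceptable_noise_level(63.0, 55.0))  # 8.0 dB; lower ANL = more noise tolerated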

15.
J Acoust Soc Am ; 138(3): 1408-17, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26428779

ABSTRACT

This study examined the use of fricative noise information and coarticulatory cues for categorization of word-final fricatives [s] and [f] by younger and older Dutch listeners. In particular, the effect of information loss in the higher frequencies on the use of these two cues for fricative categorization was investigated. If information in the higher frequencies is less strongly available, fricative identification may be impaired or listeners may learn to focus more on coarticulatory information. The present study investigated this second possibility. Phonetic categorization results showed that both younger and older Dutch listeners use both the primary cue (fricative noise) and the secondary cue (coarticulatory information) to distinguish word-final [f] from [s]. Individual hearing sensitivity in the older listeners modified the use of fricative noise information, but did not modify the use of coarticulatory information. When high-frequency information was filtered out from the speech signal, fricative noise could no longer be used by either the younger or the older adults. Crucially, they also did not learn to rely more on coarticulatory information as a compensatory cue for fricative categorization. This suggests that listeners do not readily show compensatory use of this secondary cue to fricative identity when fricative categorization becomes difficult.


Subjects
Cues (Psychology), Hearing Loss/physiopathology, Phonetics, Speech/physiology, Acoustic Stimulation, Age Factors, Aged, Female, Humans, Male, Biological Models, Netherlands, Sound Spectrography, Young Adult
16.
Atten Percept Psychophys ; 77(2): 493-507, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25373441

ABSTRACT

This study investigates two variables that may modify lexically guided perceptual learning: individual hearing sensitivity and attentional abilities. Older Dutch listeners (aged 60+ years, varying from good hearing to mild-to-moderate high-frequency hearing loss) were tested on a lexically guided perceptual learning task using the contrast [f]-[s]. This contrast mainly differentiates between the two consonants in the higher frequencies, and thus is supposedly challenging for listeners with hearing loss. The analyses showed that older listeners generally engage in lexically guided perceptual learning. Hearing loss and selective attention did not modify perceptual learning in our participant sample, while attention-switching control did: listeners with poorer attention-switching control showed a stronger perceptual learning effect. We postulate that listeners with better attention-switching control may, in general, rely more strongly on bottom-up acoustic information compared to listeners with poorer attention-switching control, making them in turn less susceptible to lexically guided perceptual learning. Our results, moreover, clearly show that lexically guided perceptual learning is not lost when acoustic processing is less accurate.


Subjects
Attention, Hearing, Learning, Speech Perception, Acoustic Stimulation, Aged, Female, High-Frequency Hearing Loss/psychology, Humans, Male, Middle Aged
17.
Front Hum Neurosci ; 8: 628, 2014.
Article in English | MEDLINE | ID: mdl-25225475

ABSTRACT

Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.

18.
Front Psychol ; 5: 772, 2014.
Article in English | MEDLINE | ID: mdl-25101034

ABSTRACT

This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n = 82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.

19.
Q J Exp Psychol (Hove) ; 67(9): 1842-62, 2014.
Article in English | MEDLINE | ID: mdl-24443921

ABSTRACT

Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate older listeners' ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether verbal working memory predicts older adults' ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect target phonemes as quickly and as accurately as possible in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) affected the speed of recognition. Contextual facilitation was modulated by older listeners' verbal working memory (measured with a backward digit span task) and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners' immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.


Subjects
Short-Term Memory/physiology, Recognition (Psychology)/physiology, Verbal Learning, Vocabulary, Acoustic Stimulation, Aged, Aged 80 and over, Aging, Female, Humans, Individuality, Male, Middle Aged, Predictive Value of Tests, Reaction Time
20.
Atten Percept Psychophys ; 75(3): 525-36, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23354594

ABSTRACT

Numerous studies have shown that younger adults engage in lexically guided perceptual learning in speech perception. Here, we investigated whether older listeners are also able to retune their phonetic category boundaries. More specifically, in this research we tried to answer two questions. First, do older adults show perceptual-learning effects of similar size to those of younger adults? Second, do differences in lexical behavior predict the strength of the perceptual-learning effect? An age group comparison revealed that older listeners do engage in lexically guided perceptual learning, but there were two age-related differences: Younger listeners had a stronger learning effect right after exposure than did older listeners, but the effect was more stable for older than for younger listeners. Moreover, a clear link was shown to exist between individuals' lexical-decision performance during exposure and the magnitude of their perceptual-learning effects. A subsequent analysis on the results of the older participants revealed that, even within the older participant group, with increasing age the perceptual retuning effect became smaller but also more stable, mirroring the age group comparison results. These results could not be explained by differences in hearing loss. The age effect may be accounted for by decreased flexibility in the adjustment of phoneme categories or by age-related changes in the dynamics of spoken-word recognition, with older adults being more affected by competition from similar-sounding lexical competitors, resulting in less lexical guidance for perceptual retuning. In conclusion, our results clearly show that the speech perception system remains flexible over the life span.


Subjects
Aging/physiology, Learning/physiology, Phonetics, Speech Perception/physiology, Adult, Aged, Aged 80 and over, Speech Discrimination Tests, Female, Humans, Individuality, Male, Middle Aged, Semantics, Speech/classification, Young Adult