Results 1 - 20 of 47
1.
J Neurosci ; 42(3): 435-442, 2022 01 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism relying on synchronized brain activity among sensory and motor regions may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.
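To make the connectivity analysis concrete, here is a minimal sketch of how a psychophysiological interaction (PPI) regressor can be constructed. All variable names and data are illustrative, and the HRF deconvolution step used in full PPI pipelines is omitted; this is not the authors' actual analysis code.

```python
import numpy as np

def ppi_regressor(seed_ts, task_regressor):
    """Simplified PPI interaction term: the element-wise product of a
    z-scored seed timecourse and a mean-centered task regressor.
    (Full PPI pipelines also deconvolve the hemodynamic response
    before forming the product.)"""
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    task_c = task_regressor - task_regressor.mean()
    return seed_z * task_c

# Hypothetical usage: regress a voxel timecourse on seed, task, and interaction.
rng = np.random.default_rng(0)
seed = rng.standard_normal(200)        # e.g., primary auditory cortex timecourse
task = np.tile([0.0, 1.0], 100)        # audiovisual vs. unimodal blocks
X = np.column_stack([np.ones(200), seed, task, ppi_regressor(seed, task)])
voxel = rng.standard_normal(200)       # placeholder target-region timecourse
betas, *_ = np.linalg.lstsq(X, voxel, rcond=None)  # betas[3] is the PPI effect
```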


Subjects
Auditory Cortex/physiology , Language , Lipreading , Nerve Net/physiology , Speech Perception/physiology , Visual Cortex/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Visual Cortex/diagnostic imaging , Young Adult
2.
J Acoust Soc Am ; 152(6): 3216, 2022 12.
Article in English | MEDLINE | ID: mdl-36586857

ABSTRACT

Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.


Subjects
Illusions , Speech Perception , Humans , Visual Perception , Language , Speech , Auditory Perception , Photic Stimulation , Acoustic Stimulation
3.
Ear Hear ; 41(3): 549-560, 2020.
Article in English | MEDLINE | ID: mdl-31453875

ABSTRACT

OBJECTIVES: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates that are both slower and faster than normal. DESIGN: Participants (N = 145), ranging in age from 22 to 92, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal) rate, two rates that were slower than normal, and two rates that were faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. RESULTS: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. CONCLUSIONS: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech for auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates. The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss.
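As a toy illustration of the variance-accounted-for analysis, the sketch below fits a multiple regression predicting auditory-visual scores from the two unimodal scores and reports R². The data are simulated and the coefficients invented; only the analysis logic reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 145
aud = rng.uniform(0.1, 0.5, n)   # simulated auditory-only proportion correct
vis = rng.uniform(0.0, 0.4, n)   # simulated visual-only proportion correct
av = 0.6 * aud + 0.5 * vis + rng.normal(0, 0.05, n)  # simulated AV scores

X = np.column_stack([np.ones(n), aud, vis])
beta, *_ = np.linalg.lstsq(X, av, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((av - pred) ** 2) / np.sum((av - av.mean()) ** 2)
print(f"R^2 from unimodal predictors: {r2:.2f}")  # cf. the >= 60% reported
```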


Subjects
Speech Perception , Acoustic Stimulation , Aged , Auditory Perception , Humans , Speech , Visual Perception
4.
Mem Cognit ; 48(8): 1403-1416, 2020 11.
Article in English | MEDLINE | ID: mdl-32671592

ABSTRACT

A number of recent studies have shown that older adults are more susceptible to context-based misperceptions in hearing (Rogers, Jacoby, & Sommers, Psychology and Aging, 27, 33-45, 2012; Sommers, Morton, & Rogers, Remembering: Attributions, Processes, and Control in Human Memory [Essays in Honor of Larry Jacoby], pp. 269-284, 2015) than are young adults. One explanation for these age-related increases in what we term false hearing is that older adults are less able than young individuals to inhibit a prepotent response favored by context. A similar explanation has been proposed for demonstrations of age-related increases in false memory (Jacoby, Bishara, Hessels, & Toth, Journal of Experimental Psychology: General, 134, 131-148, 2005). The present study was designed to compare susceptibility to false hearing and false memory in a group of young and older adults. In Experiment 1, we replicated the findings of past studies demonstrating increased frequency of false hearing in older, relative to young, adults. In Experiment 2, we demonstrated older adults' increased susceptibility to false memory in the same sample. Importantly, we found that participants who were more prone to false hearing also tended to be more prone to false memory, supporting the idea that the two phenomena share a common mechanism. The results are discussed within the framework of a capture model, which differentiates between context-based responding resulting from failures of cognitive control and context-based guessing.


Subjects
Hearing , Memory , Aged , Aging , Humans
5.
Mem Cognit ; 48(5): 870-883, 2020 07.
Article in English | MEDLINE | ID: mdl-31975029

ABSTRACT

Both languages are jointly activated in the bilingual brain, requiring bilinguals to select the target language while avoiding interference from the unwanted language. This cross-language interference is similar to the within-language interference created by the Deese-Roediger-McDermott false memory paradigm (DRM; Roediger & McDermott, 1995, Journal of Experimental Psychology: Learning, Memory, and Cognition, 21[4], 803-814). Although the mechanisms mediating false memory in the DRM paradigm remain an area of investigation, two of the more prominent theories-implicit associative response (IAR) and fuzzy trace-provide frameworks for using the DRM paradigm to advance our understanding of bilingual language processing. Three studies are reported comparing accuracy of monolingual and bilingual participants on different versions of the DRM. Study 1 presented lists of phonological associates and found that bilinguals showed higher rates of false recognition than did monolinguals. Study 2 used the standard semantic variant of the task and found that bilinguals showed lower false recognition rates than did monolinguals. Study 3 replicated and extended the findings in Experiment 2 in another semantic version of the task presented to younger and older adult monolingual and bilingual participants. These results are discussed within the frameworks of IAR and fuzzy-trace theories as further explicating differences between monolingual and bilingual processing.


Subjects
Language , Cognition , Humans , Memory
6.
Behav Res Methods ; 52(4): 1795-1799, 2020 08.
Article in English | MEDLINE | ID: mdl-31993960

ABSTRACT

In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence. Sentences varied between eight and ten words in length. At least 100 unique participants contributed to each sentence. All responses were reviewed by human raters to mitigate the influence of misspellings and typographical errors. The responses provide a range of predictability values for 13,438 unique target words, 6790 of which appear in more than one sentence context. We also provide entropy values based on the relative predictability of multiple responses. A searchable set of norms is available at http://sentencenorms.net. Finally, we provide the code used to collate and organize the responses to facilitate additional analyses and future research projects.
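The two key quantities in such norms, cloze probability and response entropy, can be computed from raw completions as sketched below. The example responses are hypothetical; the entropy formula (Shannon entropy over the response distribution) matches the standard definition, though the published norms may differ in preprocessing details.

```python
from collections import Counter
import math

def cloze_and_entropy(responses):
    """Return each completion's cloze probability and the Shannon
    entropy (bits) of the response distribution for one sentence."""
    counts = Counter(responses)
    total = sum(counts.values())
    probs = {word: c / total for word, c in counts.items()}
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return probs, entropy

# Hypothetical completions for "The dog chased the ___":
probs, H = cloze_and_entropy(["cat"] * 70 + ["ball"] * 20 + ["car"] * 10)
print(probs["cat"], round(H, 2))  # cloze probability 0.7; entropy ~1.16 bits
```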


Subjects
Comprehension , Language , Humans
7.
J Acoust Soc Am ; 144(6): 3437, 2018 12.
Article in English | MEDLINE | ID: mdl-30599649

ABSTRACT

This paper presents an investigation of children's subglottal resonances (SGRs), the natural frequencies of the tracheo-bronchial acoustic system. A total of 43 children (31 male, 12 female) aged between 6 and 18 yr were recorded. Both microphone signals of various consonant-vowel-consonant words and subglottal accelerometer signals of the sustained vowel /ɑ/ were recorded for each of the children, along with age and standing height. The first three SGRs of each child were measured from the sustained vowel subglottal accelerometer signals. A model relating SGRs to standing height was developed based on the quarter-wavelength resonator model, previously developed for adult SGRs and heights. Based on difficulties in predicting the higher SGR values for the younger children, the model of the third SGR was refined to account for frequency-dependent acoustic lengths of the tracheo-bronchial system. This updated model more accurately estimates both adult and child SGRs based on their heights. These results indicate the importance of considering frequency-dependent acoustic lengths of the subglottal system.
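A quarter-wavelength resonator predicts the resonances of a tube closed at one end as f_n = (2n - 1) * c / (4L). The sketch below applies this to a height-derived acoustic length; the height-to-length scaling factor is an assumed round number for illustration, not the empirically fitted value from the paper.

```python
def sgr_quarter_wavelength(height_cm, n, c=35000.0, height_to_length=8.0):
    """Estimate the nth subglottal resonance (Hz) from standing height:
    f_n = (2n - 1) * c / (4 * L), with acoustic length L approximated
    as height / height_to_length (an illustrative scaling factor).
    c is the speed of sound in cm/s."""
    L = height_cm / height_to_length
    return (2 * n - 1) * c / (4 * L)

# A 170 cm speaker gives L ~= 21 cm:
for n in (1, 2, 3):
    print(n, round(sgr_quarter_wavelength(170, n)))  # ~412, 1235, 2059 Hz
```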

8.
Ear Hear ; 37 Suppl 1: 62S-8S, 2016.
Article in English | MEDLINE | ID: mdl-27355772

ABSTRACT

One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement, the improvement in speech understanding when going from an auditory-only (A-only) to an auditory-visual (AV) condition. To compare word recognition in A-only and AV modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task. Participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise. List presentation was stopped randomly and participants were then asked to repeat the last three words that were presented. Listening effort was assessed using recall performance in the two- and three-back positions. Younger, but not older, adults exhibited reduced listening effort as indexed by greater recall in the two- and three-back positions for the AV compared with the A-only presentations. For younger, but not older, adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception.


Subjects
Noise , Speech Perception , Visual Perception , Acoustic Stimulation , Adolescent , Age Factors , Aged , Audiometry, Pure-Tone , Auditory Perception , Female , Humans , Male , Middle Aged , Photic Stimulation , Young Adult
9.
Ear Hear ; 37 Suppl 1: 5S-27S, 2016.
Article in English | MEDLINE | ID: mdl-27355771

ABSTRACT

The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to Titchener (1908), who described the effects of attention on perception; he used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.


Subjects
Attention , Cognition , Hearing Loss/psychology , Speech Perception , Auditory Perception , Comprehension , Humans
10.
J Acoust Soc Am ; 137(3): 1443-51, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25786955

ABSTRACT

The current work investigated the role of single vowels in talker normalization. Following initial training to identify six talkers from the isolated vowel /i/, participants were asked to identify vowels in three different conditions. In the blocked-talker conditions, the vowels were blocked by talker. In the mixed-talker conditions, vowels from all six talkers were presented in random order. The precursor mixed-talker conditions were identical to the mixed-talker conditions except that participants were provided with either a sample vowel or just the written name of a talker before target-vowel presentation. In experiment 1, the precursor vowel was always spoken by the same talker as the target vowel. Identification accuracy did not differ significantly for the blocked and precursor mixed-talker conditions and both were better than the pure mixed-talker condition. In experiment 2, half of the trials had a precursor spoken by the same talker as the target and half had a different talker. For the same-talker precursor condition, the results replicated those in experiment 1. In the different-talker precursor, no benefit was observed relative to the pure-mixed condition. In experiment 3, only the written name was presented as a precursor and no benefits were observed relative to the pure-mixed condition.


Subjects
Cues , Recognition, Psychology , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Audiometry, Speech , Female , Humans , Male , Perceptual Masking , Phonetics , Young Adult
11.
Psychophysiology ; 60(7): e14256, 2023 07.
Article in English | MEDLINE | ID: mdl-36734299

ABSTRACT

Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds due to systematic physiological changes over time. In this paper, we investigated the degree to which fatigue effects could be ameliorated by experimenter intervention. In Experiment 1, we assigned participants to one of three groups-no breaks, kinetic breaks (playing with toys, but no social interaction), or chatting with a research assistant-and compared the pupil response across conditions. In Experiment 2, we additionally tested the effect of researcher observation. Only breaks including social interaction significantly reduced the fatigue of the pupil response across trials. However, in all conditions we found robust evidence for fatigue effects: that is, regardless of protocol, the task-evoked pupil response was substantially diminished (at least 60%) over the duration of the experiment. We account for the variance of fatigue effects in our pupillometry data using multiple common statistical modeling approaches (e.g., linear mixed-effects models of peak, mean, and baseline pupil diameters, as well as growth curve models of time-course data). We conclude that pupil attenuation is a predictable phenomenon that should be accommodated in our experimental designs and statistical models.
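One of the modeling approaches named in the abstract, a linear mixed-effects model of peak pupil diameter with a fixed effect of trial number and random intercepts by participant, might look like the sketch below. The data are simulated and the formula is an assumed minimal specification, not the authors' full model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a task-evoked peak pupil response that attenuates over trials.
rng = np.random.default_rng(2)
n_subj, n_trials = 30, 100
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "trial": np.tile(np.arange(n_trials), n_subj),
})
subj_intercept = rng.normal(0.5, 0.1, n_subj)[df["subject"]]
df["peak"] = subj_intercept - 0.003 * df["trial"] + rng.normal(0, 0.05, len(df))

# Fixed effect of trial (the fatigue slope); random intercepts by participant.
fit = smf.mixedlm("peak ~ trial", df, groups=df["subject"]).fit()
print(fit.params["trial"])  # a negative slope indicates attenuation over time
```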


Subjects
Fatigue , Pupil , Humans , Pupil/physiology , Cognition/physiology
12.
J Acoust Soc Am ; 132(4): 2592-602, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039452

ABSTRACT

This paper presents a large-scale study of subglottal resonances (SGRs) (the resonant frequencies of the tracheo-bronchial tree) and their relations to various acoustical and physiological characteristics of speakers. The paper presents data from a corpus of simultaneous microphone and accelerometer recordings of consonant-vowel-consonant (CVC) words embedded in a carrier phrase spoken by 25 male and 25 female native speakers of American English ranging in age from 18 to 24 yr. The corpus contains 17,500 utterances of 14 American English monophthongs, diphthongs, and the rhotic approximant [ɹ] in various CVC contexts. Only monophthongs are analyzed in this paper. Speaker height and age were also recorded. Findings include (1) normative data on the frequency distribution of SGRs for young adults, (2) the dependence of SGRs on height, (3) the lack of a correlation between SGRs and formants or the fundamental frequency, (4) a poor correlation of the first SGR with the second and third SGRs but a strong correlation between the second and third SGRs, and (5) a significant effect of vowel category on SGR frequencies, although this effect is smaller than the measurement standard deviations and therefore negligible for practical purposes.


Subjects
Glottis/physiology , Language , Phonation , Speech Acoustics , Voice Quality , Accelerometry , Adolescent , Age Factors , Biomechanical Phenomena , Body Height , Female , Humans , Male , Sex Factors , Sound Spectrography , Speech Production Measurement , Vibration , Young Adult
13.
J Am Acad Audiol ; 23(8): 623-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22967737

ABSTRACT

BACKGROUND: Patients seeking treatment for hearing-related communication difficulties are often disappointed with the eventual outcomes, even after they receive a hearing aid or a cochlear implant. One approach that audiologists have used to improve communication outcomes is to provide auditory training (AT), but compliance rates for completing AT programs are notoriously low. PURPOSE: The primary purpose of the investigation was to conduct a patient-based evaluation of the benefits of an AT program, I Hear What You Mean, in order to determine how the AT experience might be improved. A secondary purpose was to examine whether patient perceptions of the AT experience varied depending on whether they were trained with a single talker's voice or heard training materials from multiple talkers. RESEARCH DESIGN: Participants completed a 6 wk auditory training program and were asked to respond to a posttraining questionnaire. Half of the participants heard the training materials spoken by six different talkers, and half heard the materials produced by only one of the six talkers. STUDY SAMPLE: Participants included 78 adult hearing-aid users and 15 cochlear-implant users for a total of 93 participants who completed the study, ages 18 to 89 yr (M = 66 yr, SD = 16.67 yr). Forty-three females and 50 males participated. The mean better ear pure-tone average for the participants was 56 dB HL (SD = 25 dB). INTERVENTION: Participants completed the single- or multiple-talker version of the 6 wk computerized AT program, I Hear What You Mean, followed by completion of a posttraining questionnaire in order to rate the benefits of overall training and the training activities and to describe what they liked best and what they liked least. DATA COLLECTION AND ANALYSIS: After completing a 6 wk computerized AT program, participants completed a posttraining questionnaire. Seven-point Likert scaled responses to whether understanding spoken language had improved were converted to individualized z scores and analyzed for changes due to AT. Written responses were coded and categorized to consider both positive and negative subjective opinions of the AT program. Regression analyses were conducted to examine the relationship between perceived effort and perceived benefit and to identify factors that predict overall program enjoyment. RESULTS: Participants reported improvements in their abilities to recognize spoken language and in their self-confidence as a result of participating in AT. Few differences were observed between reports from those trained with one versus six different talkers. Correlations between perceived benefit and enjoyment were not significant, and only participant age added unique variance to predicting program enjoyment. CONCLUSIONS: Participants perceived AT to be beneficial. Perceived benefit did not correlate with perceived enjoyment. Compliance with computerized AT programs might be enhanced if patients have regular contact with a hearing professional and train with meaning-based materials. An unheralded benefit of AT may be an increased sense of control over the hearing loss. In future efforts, we might aim to make training more engaging and entertaining, and less tedious.
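The abstract's conversion of 7-point Likert ratings to "individualized z scores" presumably standardizes each participant's ratings against their own mean and spread; the helper below sketches that reading. The interpretation and function name are assumptions, not the authors' documented procedure.

```python
import numpy as np

def individualized_z(ratings):
    """Z-score one participant's Likert ratings using that
    participant's own mean and standard deviation (one plausible
    reading of "individualized z scores")."""
    r = np.asarray(ratings, dtype=float)
    return (r - r.mean()) / r.std(ddof=1)

print(individualized_z([5, 6, 4, 7, 5]).round(2))  # [-0.35  0.53 -1.23  1.4  -0.35]
```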


Subjects
Cochlear Implantation/psychology , Hearing Aids/psychology , Hearing Loss/psychology , Patient Acceptance of Health Care/psychology , Psychoacoustics , Adolescent , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Cochlear Implantation/rehabilitation , Female , Hearing Loss/rehabilitation , Humans , Male , Middle Aged , Patient Satisfaction , Self-Assessment , Surveys and Questionnaires , Young Adult
14.
Front Psychol ; 13: 821044, 2022.
Article in English | MEDLINE | ID: mdl-35651579

ABSTRACT

Several recent studies have demonstrated context-based, high-confidence misperceptions in hearing, referred to as false hearing. These studies have unanimously found that older adults are more susceptible to false hearing than are younger adults, which the authors have attributed to an age-related decline in the ability to inhibit the activation of a contextually predicted (but incorrect) response. However, no published work has investigated this activation-based account of false hearing. In the present study, younger and older adults listened to sentences in which the semantic context provided by the sentence was either unpredictive, highly predictive and valid, or highly predictive and misleading in relation to a sentence-final word presented in noise. Participants were tasked with clicking on one of four images to indicate which image depicted the sentence-final word. We used eye-tracking to investigate how activation, as revealed in patterns of fixations, of different response options changed in real time over the course of sentences. We found that both younger and older adults exhibited anticipatory activation of the target word when highly predictive contextual cues were available. When these contextual cues were misleading, younger adults were able to suppress the activation of the contextually predicted word to a greater extent than older adults. These findings are interpreted as evidence for an activation-based model of speech perception and for the role of inhibitory control in false hearing.

15.
J Alzheimers Dis ; 90(2): 749-759, 2022.
Article in English | MEDLINE | ID: mdl-36189586

ABSTRACT

BACKGROUND: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS: Participants were 53-86 years old, with (n = 16) or without (n = 32) dementia symptoms as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar-sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS: Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION: These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.


Subjects
Dementia , Speech Perception , Humans , Aged , Noise , Dementia/diagnosis
16.
Psychon Bull Rev ; 29(1): 268-280, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34405386

ABSTRACT

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
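Growth curve analysis of pupil timecourses typically regresses the response on orthogonal polynomial time terms. The sketch below builds such terms via QR decomposition (analogous to R's poly()) and fits one condition's mean timecourse; the signal is simulated, and a full analysis would add condition and group terms plus random effects.

```python
import numpy as np

def orthogonal_time(t, degree=2):
    """Orthogonal polynomial time terms for growth curve analysis,
    built by QR-decomposing a Vandermonde matrix (like R's poly())."""
    V = np.vander(np.asarray(t, dtype=float), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]  # drop the constant column

# Simulated mean pupil timecourse over a 3 s trial window.
t = np.linspace(0, 3, 150)
pupil = 0.4 * np.exp(-(t - 1.2) ** 2) + np.random.default_rng(3).normal(0, 0.01, t.size)

T = orthogonal_time(t, degree=2)               # linear + quadratic time
X = np.column_stack([np.ones_like(t), T])      # intercept + time terms
betas, *_ = np.linalg.lstsq(X, pupil, rcond=None)
print(betas)  # intercept reflects overall size; higher terms the curve shape
```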


Subjects
Speech Perception , Aged , Cognition , Humans , Speech Perception/physiology , Young Adult
17.
Ear Hear ; 32(6): 775-81, 2011.
Article in English | MEDLINE | ID: mdl-21716112

ABSTRACT

OBJECTIVES: Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults, and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. DESIGN: This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. RESULTS: Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but that age accounted for significant variance even after controlling for auditory sensitivity. CONCLUSIONS: Results suggest that age-related reductions in auditory sensitivity account for a sizable portion of the individual variance in listening comprehension observed across the adult lifespan. Other potential contributors, including a possible role for age-related declines in perceptual and cognitive abilities, are discussed. Clinically, the results suggest that amplification is likely to improve listening comprehension but that increased audibility alone may not be sufficient to maintain listening comprehension beyond age 65 to 70 yr. Additional research will be needed to identify potential target abilities for training or other rehabilitation procedures that could supplement sensory aids to provide additional improvements in listening comprehension.


Subjects
Aging/physiology , Narration , Phonetics , Presbycusis/diagnosis , Presbycusis/physiopathology , Speech Perception/physiology , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold/physiology , Female , Hearing Tests/methods , Humans , Male , Middle Aged , Young Adult
18.
Ear Hear ; 32(5): 650-5, 2011.
Article in English | MEDLINE | ID: mdl-21478751

ABSTRACT

OBJECTIVE: The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. DESIGN: Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. RESULTS: Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. CONCLUSIONS: Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
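Establishing an SNR at threshold by adaptively varying signal level is typically done with a staircase. Below is a sketch of a simple 1-up/1-down track; the rule, step size, and reversal-averaging are assumptions for illustration, since the abstract does not specify the exact adaptive procedure.

```python
import random

def staircase(respond, start_snr=0.0, step=2.0, n_trials=40):
    """Simple 1-up/1-down adaptive track. `respond(snr)` returns True
    when the listener detects the signal. Threshold is estimated as
    the mean of the last few reversal points."""
    snr, last, reversals = start_snr, None, []
    for _ in range(n_trials):
        correct = respond(snr)
        if last is not None and correct != last:
            reversals.append(snr)
        snr += -step if correct else step   # harder after hits, easier after misses
        last = correct
    tail = reversals[-6:]
    return sum(tail) / len(tail) if tail else snr

# Hypothetical observer whose detection probability is 0.5 at -12 dB SNR:
random.seed(4)
est = staircase(lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 12) / 3)))
print(round(est, 1))  # converges near the 50%-correct SNR
```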


Subjects
Aging/physiology , Speech Discrimination Tests , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold/physiology , Contrast Sensitivity/physiology , Humans , Phonetics , Photic Stimulation/methods , Signal-to-Noise Ratio , Young Adult
19.
J Acoust Soc Am ; 130(3): 1663-72, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21895103

ABSTRACT

Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition.
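One common way to compute a phi-square confusability measure between two stimulus items is to form the 2 x k table of their response counts and normalize the chi-square statistic by the total count, as sketched below. This formulation is an assumption based on the standard definition; the paper's exact computation may differ in detail.

```python
import numpy as np

def phi_square(row_a, row_b):
    """Phi-square between two confusion-matrix rows: chi-square on the
    2 x k table of response counts, divided by the total count.
    0 means identical response patterns; larger values mean more
    distinct (less confusable) items."""
    table = np.vstack([row_a, row_b]).astype(float)
    total = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / total
    chi2 = ((table - expected) ** 2 / expected).sum()
    return chi2 / total

# Hypothetical response counts for /b/ and /p/ across three categories:
print(round(phi_square([50, 30, 20], [45, 35, 20]), 3))  # small value: confusable
```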


Subjects
Auditory Pathways/physiology , Cognition , Language , Recognition, Psychology , Speech Acoustics , Speech Perception , Visual Pathways/physiology , Acoustic Stimulation , Adolescent , Audiometry, Speech , Female , Humans , Male , Models, Statistical , Photic Stimulation , Video Recording , Young Adult
20.
J Acoust Soc Am ; 130(4): 2108-15, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21973365

ABSTRACT

Previous studies of subglottal resonances have reported findings based on relatively few subjects, and the relations between these resonances, subglottal anatomy, and models of subglottal acoustics are not well understood. In this study, accelerometer signals of subglottal acoustics recorded during sustained [a:] vowels of 50 adult native speakers (25 males, 25 females) of American English were analyzed. The study confirms that a simple uniform tube model of subglottal airways, closed at the glottis and open at the inferior end, is appropriate for describing subglottal resonances. The main findings of the study are (1) whereas the walls may be considered rigid in the frequency range of Sg2 and Sg3, they are yielding and resonant in the frequency range of Sg1, with a resulting ~4/3 increase in wave propagation velocity and, consequently, in the frequency of Sg1; (2) the "acoustic length" of the equivalent uniform tube varies between 18 and 23.5 cm, and is approximately equal to the height of the speaker divided by an empirically determined scaling factor; (3) trachea length can also be predicted by dividing height by another empirically determined scaling factor; and (4) differences between the subglottal resonances of males and females can be accounted for by height-related differences.
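The model described here is again the odd-quarter-wavelength tube, now with the abstract's ~4/3 wall-compliance correction applied in the Sg1 range. The height-to-length scaling factor below is an assumed round number for illustration, not the empirically determined value from the study.

```python
def subglottal_resonances(height_cm, scale=8.0, c=35000.0):
    """Uniform tube closed at the glottis and open at the inferior end:
    f_n = (2n - 1) * c / (4 * L), with L approximated as height / scale.
    Per the abstract, yielding walls raise effective propagation speed
    (hence Sg1) by ~4/3, while walls are effectively rigid for Sg2/Sg3."""
    L = height_cm / scale
    f = lambda n: (2 * n - 1) * c / (4 * L)
    return f(1) * 4 / 3, f(2), f(3)

print([round(x) for x in subglottal_resonances(170)])  # ~549, 1235, 2059 Hz
```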


Subjects
Phonation , Speech Acoustics , Trachea/physiology , Adolescent , Adult , Biomechanical Phenomena , Body Height , Female , Humans , Male , Models, Biological , Pressure , Sex Factors , Sound Spectrography , Time Factors , Trachea/anatomy & histology , Vibration , Young Adult