Results 1 - 20 of 47
1.
Psychophysiology ; 60(7): e14256, 2023 07.
Article in English | MEDLINE | ID: mdl-36734299

ABSTRACT

Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds due to systematic physiological changes over time. In this paper, we investigated the degree to which fatigue effects could be ameliorated by experimenter intervention. In Experiment 1, we assigned participants to one of three groups-no breaks, kinetic breaks (playing with toys, but no social interaction), or chatting with a research assistant-and compared the pupil response across conditions. In Experiment 2, we additionally tested the effect of researcher observation. Only breaks including social interaction significantly reduced the fatigue of the pupil response across trials. However, in all conditions we found robust evidence for fatigue effects: that is, regardless of protocol, the task-evoked pupil response was substantially diminished (at least 60%) over the duration of the experiment. We account for the variance of fatigue effects in our pupillometry data using multiple common statistical modeling approaches (e.g., linear mixed-effects models of peak, mean, and baseline pupil diameters, as well as growth curve models of time-course data). We conclude that pupil attenuation is a predictable phenomenon that should be accommodated in our experimental designs and statistical models.
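As a rough illustration of how a fatigue effect like this can be quantified (a simplified two-stage sketch on simulated data, not the authors' analysis code), one can regress each participant's peak pupil diameter on trial number and average the slopes, a common approximation to a linear mixed-effects model with random intercepts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 100
trials = np.arange(n_trials)

# Simulate task-evoked peak pupil dilation (mm): a subject-specific baseline,
# a linear decline over trials (the fatigue effect), and measurement noise.
slopes = []
for _ in range(n_subj):
    baseline = rng.normal(0.5, 0.05)
    peaks = baseline - 0.003 * trials + rng.normal(0.0, 0.02, n_trials)
    slope, _intercept = np.polyfit(trials, peaks, deg=1)  # per-subject OLS fit
    slopes.append(slope)

mean_slope = float(np.mean(slopes))
print(f"mean fatigue slope: {mean_slope:.4f} mm/trial")  # recovers roughly -0.003
```

A full mixed-effects or growth-curve analysis, as described in the abstract, would additionally model random slopes and the trial-level time course rather than averaging per-subject fits.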


Subject(s)
Fatigue, Pupil, Humans, Pupil/physiology, Cognition/physiology
2.
J Acoust Soc Am ; 152(6): 3216, 2022 12.
Article in English | MEDLINE | ID: mdl-36586857

ABSTRACT

Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.


Subject(s)
Illusions, Speech Perception, Humans, Visual Perception, Language, Speech, Auditory Perception, Photic Stimulation, Acoustic Stimulation
3.
J Alzheimers Dis ; 90(2): 749-759, 2022.
Article in English | MEDLINE | ID: mdl-36189586

ABSTRACT

BACKGROUND: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS: Participants were adults aged 53-86 years with (n = 16) or without (n = 32) dementia symptoms as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar-sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS: Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION: These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.


Subject(s)
Dementia, Speech Perception, Humans, Aged, Noise, Dementia/diagnosis
4.
Front Psychol ; 13: 821044, 2022.
Article in English | MEDLINE | ID: mdl-35651579

ABSTRACT

Several recent studies have demonstrated context-based, high-confidence misperceptions in hearing, referred to as false hearing. These studies have unanimously found that older adults are more susceptible to false hearing than are younger adults, which the authors have attributed to an age-related decline in the ability to inhibit the activation of a contextually predicted (but incorrect) response. However, no published work has investigated this activation-based account of false hearing. In the present study, younger and older adults listened to sentences in which the semantic context provided by the sentence was either unpredictive, highly predictive and valid, or highly predictive and misleading in relation to a sentence-final word presented in noise. Participants were tasked with clicking on the one of four images that depicted the sentence-final word. We used eye-tracking to investigate how activation of the different response options, as revealed in patterns of fixations, changed in real time over the course of sentences. We found that both younger and older adults exhibited anticipatory activation of the target word when highly predictive contextual cues were available. When these contextual cues were misleading, younger adults were able to suppress the activation of the contextually predicted word to a greater extent than older adults. These findings are interpreted as evidence for an activation-based model of speech perception and for the role of inhibitory control in false hearing.

5.
J Neurosci ; 42(3): 435-442, 2022 01 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism relying on synchronized brain activity among sensory and motor regions may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.


Subject(s)
Auditory Cortex/physiology, Language, Lipreading, Nerve Net/physiology, Speech Perception/physiology, Visual Cortex/physiology, Visual Perception/physiology, Adult, Aged, Aged 80 and over, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Visual Cortex/diagnostic imaging, Young Adult
6.
Psychon Bull Rev ; 29(1): 268-280, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34405386

ABSTRACT

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.


Subject(s)
Speech Perception, Aged, Cognition, Humans, Speech Perception/physiology, Young Adult
7.
Lang Speech Hear Serv Sch ; 52(4): 1049-1060, 2021 10 18.
Article in English | MEDLINE | ID: mdl-34403290

ABSTRACT

Purpose: A meaning-oriented auditory training program for children who are deaf or hard of hearing (d/hh) was assessed with regard to its efficacy in promoting novel word learning. Method: While administering the auditory training program, one of the authors (Elizabeth Mauzé) observed that children were learning words they previously did not know. Therefore, we systematically assessed vocabulary gains among 16 children. Most completed pretest, posttest, and retention versions of a picture-naming task in which they attempted to verbally identify 199 color pictures of words that would appear during training. Posttest and retention versions included both pictures used and not used during training in order to test generalization of associations between words and their referents. Importantly, each training session involved meaning-oriented, albeit simple, activities/games on a computer. Results: At posttest, the percentage of word gain was 27.3% (SD = 12.5; confidence interval [CI] of the mean: 24.2-30.4) using trained pictures as cues and 25.9% (CI of the mean: 22.9-29.0) using untrained pictures as cues. An analysis of retention scores (for 13 of the participants who completed it weeks later) indicated strikingly high levels of retention for the words that had been learned. Conclusions: These findings favor auditory training that is meaning oriented when it comes to the acquisition of different linguistic subsystems, lexis in this case. We also expand the discussion to include other evidence-based recommendations regarding how vocabulary is presented (input-based effects) and what learners are asked to do (task-based effects) as part of an overall effort to help children who are d/hh increase their vocabulary knowledge.


Subject(s)
Hearing Loss, Vocabulary, Child, Hearing, Hearing Loss/therapy, Humans, Linguistics, Verbal Learning
8.
JAMA Otolaryngol Head Neck Surg ; 147(5): 442-449, 2021 05 01.
Article in English | MEDLINE | ID: mdl-33662120

ABSTRACT

Importance: Accurate assessment of hearing is critically important regardless of a person's cognitive ability. The degree to which hearing can be reliably measured in adults with mild dementia has not been determined. Objective: To obtain quantitative measures of reliability to evaluate the degree to which audiologic testing can be accurately conducted in older adults with mild dementia. Design, Setting, and Participants: This repeated-measures cross-sectional study consisted of a comprehensive audiologic assessment on 2 occasions separated by 1 to 2 weeks performed in the department of otolaryngology at the Washington University School of Medicine from December 3, 2018, to March 4, 2020. Participants were 15 older adults with a verified diagnosis of mild dementia and 32 older adults without a verified diagnosis of mild dementia who were recruited from the Knight Alzheimer Disease Research Center at Washington University in St Louis. Main Outcomes and Measures: Test-retest reliability was assessed for tympanometry, acoustic reflex thresholds, otoacoustic emissions, hearing sensitivity, speech reception threshold, speech perception in noise, and hearing handicap, using standard clinical audiology measures. Results: A total of 47 older adults (26 women; mean [SD] age, 74.8 [6.0] years [range, 53-87 years]), including 32 with normal cognitive function and 15 with very mild or mild dementia, completed the study protocol. For participants with mild dementia, high test-retest reliability (Spearman ρ > 0.80) was found for most measures typically included in a comprehensive audiometric evaluation. For acoustic reflex thresholds, agreement was moderate to high, averaging approximately 83% across frequencies for both groups. 
Scores for the screening Hearing Handicap Inventory for the Elderly at time 1 and time 2 were highly correlated for the group with normal cognitive function (r = 0.84 [95% CI, 0.70-0.93]) and for the group with mild dementia (r = 0.96 [95% CI, 0.88-0.99]). For hearing thresholds, all rank-order correlations were above 0.80 with 95% CIs at or below 15% in width, with the exception of a moderate correlation of bone conduction thresholds at 500 Hz for the group with normal cognitive function (r = 0.69 [95% CI, 0.50-0.84]) and slightly wider 95% CIs for low-frequency bone conduction thresholds for both groups. For speech reception thresholds, correlations were high for groups with normal cognitive function (r = 0.91 [95% CI, 0.84-0.95]) and mild dementia (r = 0.83 [95% CI, 0.63-0.94]). Conclusions and Relevance: Test-retest reliability for hearing measures obtained from participants with mild dementia was comparable to that obtained from cognitively normal participants. These findings suggest that mild cognitive impairment does not preclude accurate audiologic assessment.
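The ρ > 0.80 reliability criterion used above can be illustrated with a minimal sketch on hypothetical data (not the study's measurements): Spearman's ρ is simply the Pearson correlation of the rank-transformed scores from the two sessions.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of rank-transformed data.

    Note: this double-argsort ranking does not handle ties, which is fine for
    continuous simulated values but not for real audiometric step data.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical test-retest data: hearing thresholds (dB HL) for 15 listeners
# measured at two sessions 1-2 weeks apart, with small retest variability.
rng = np.random.default_rng(1)
session1 = rng.uniform(10, 60, 15)
session2 = session1 + rng.normal(0, 3, 15)

rho = spearman_rho(session1, session2)
print(f"test-retest Spearman rho: {rho:.2f}")
```

When retest variability is small relative to between-listener spread, as here, ρ comfortably exceeds the 0.80 threshold the study treats as high reliability.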


Subject(s)
Audiometry/methods, Dementia, Hearing Loss/diagnosis, Acoustic Impedance Tests, Aged, Aged 80 and over, Auditory Threshold, Cross-Sectional Studies, Disability Evaluation, Female, Humans, Male, Middle Aged, Otoacoustic Emissions, Spontaneous, Reproducibility of Results, Speech Perception
9.
Mem Cognit ; 48(8): 1403-1416, 2020 11.
Article in English | MEDLINE | ID: mdl-32671592

ABSTRACT

A number of recent studies have shown that older adults are more susceptible to context-based misperceptions in hearing (Rogers, Jacoby, & Sommers, Psychology and Aging, 27, 33-45, 2012; Sommers, Morton, & Rogers, Remembering: Attributions, Processes, and Control in Human Memory [Essays in Honor of Larry Jacoby], pp. 269-284, 2015) than are young adults. One explanation for these age-related increases in what we term false hearing is that older adults are less able than young individuals to inhibit a prepotent response favored by context. A similar explanation has been proposed for demonstrations of age-related increases in false memory (Jacoby, Bishara, Hessels, & Toth, Journal of Experimental Psychology: General, 134, 131-148, 2005). The present study was designed to compare susceptibility to false hearing and false memory in a group of young and older adults. In Experiment 1, we replicated the findings of past studies demonstrating increased frequency of false hearing in older, relative to young, adults. In Experiment 2, we demonstrated older adults' increased susceptibility to false memory in the same sample. Importantly, we found that participants who were more prone to false hearing also tended to be more prone to false memory, supporting the idea that the two phenomena share a common mechanism. The results are discussed within the framework of a capture model, which differentiates between context-based responding resulting from failures of cognitive control and context-based guessing.


Subject(s)
Hearing, Memory, Aged, Aging, Humans
10.
Behav Res Methods ; 52(4): 1795-1799, 2020 08.
Article in English | MEDLINE | ID: mdl-31993960

ABSTRACT

In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence. Sentences varied between eight and ten words in length. At least 100 unique participants contributed to each sentence. All responses were reviewed by human raters to mitigate the influence of misspellings and typographical errors. The responses provide a range of predictability values for 13,438 unique target words, 6790 of which appear in more than one sentence context. We also provide entropy values based on the relative predictability of multiple responses. A searchable set of norms is available at http://sentencenorms.net. Finally, we provide the code used to collate and organize the responses to facilitate additional analyses and future research projects.
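The cloze probabilities and response entropy described above follow standard definitions; the sketch below (with a hypothetical sentence frame and made-up responses, not items from the norms) shows how both are computed:

```python
from collections import Counter
import math

def cloze_stats(responses):
    """Cloze probability per completion and Shannon entropy (bits) of the
    response distribution for one sentence frame."""
    counts = Counter(responses)
    total = sum(counts.values())
    probs = {word: c / total for word, c in counts.items()}
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return probs, entropy

# Hypothetical responses from 100 participants to "The dog chased the ____":
responses = ["cat"] * 70 + ["ball"] * 20 + ["squirrel"] * 10
probs, entropy = cloze_stats(responses)
print(probs["cat"])        # 0.7 cloze probability for the modal completion
print(round(entropy, 3))   # about 1.157 bits
```

A frame with one dominant completion has low entropy; a frame whose responses spread over many words has high entropy, making entropy a natural index of contextual constraint alongside cloze probability.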


Subject(s)
Comprehension, Language, Humans
11.
Mem Cognit ; 48(5): 870-883, 2020 07.
Article in English | MEDLINE | ID: mdl-31975029

ABSTRACT

Both languages are jointly activated in the bilingual brain, requiring bilinguals to select the target language while avoiding interference from the unwanted language. This cross-language interference is similar to the within-language interference created by the Deese-Roediger-McDermott false memory paradigm (DRM; Roediger & McDermott, 1995, Journal of Experimental Psychology: Learning, Memory, and Cognition, 21[4], 803-814). Although the mechanisms mediating false memory in the DRM paradigm remain an area of investigation, two of the more prominent theories, implicit associative response (IAR) and fuzzy trace, provide frameworks for using the DRM paradigm to advance our understanding of bilingual language processing. Three studies are reported comparing accuracy of monolingual and bilingual participants on different versions of the DRM. Study 1 presented lists of phonological associates and found that bilinguals showed higher rates of false recognition than did monolinguals. Study 2 used the standard semantic variant of the task and found that bilinguals showed lower false recognition rates than did monolinguals. Study 3 replicated and extended the findings of Study 2 in another semantic version of the task presented to younger and older adult monolingual and bilingual participants. These results are discussed within the frameworks of IAR and fuzzy-trace theories as further explicating differences between monolingual and bilingual processing.


Subject(s)
Language, Cognition, Humans, Memory
12.
Collabra Psychol ; 6(1)2020.
Article in English | MEDLINE | ID: mdl-34327298

ABSTRACT

This study assessed the effects of age, word frequency, and background noise on the time course of lexical activation during spoken word recognition. Participants (41 young adults and 39 older adults) performed a visual world word recognition task while we monitored their gaze position. On each trial, four phonologically unrelated pictures appeared on the screen. A target word was presented auditorily following a carrier phrase ("Click on the ________"), at which point participants were instructed to use the mouse to click on the picture that corresponded to the target word. High- and low-frequency words were presented in quiet to half of the participants. The other half heard the words in a low level of noise in which the words were still readily identifiable. Results showed that, even in the absence of phonological competitors in the visual array, high-frequency words were fixated more quickly than low-frequency words by both listener groups. Young adults were generally faster to fixate on targets compared to older adults, but the pattern of interactions among noise, word frequency, and listener age showed that older adults' lexical activation largely matches that of young adults in a modest amount of noise.

13.
Neurobiol Lang (Camb) ; 1(4): 452-473, 2020.
Article in English | MEDLINE | ID: mdl-34327333

ABSTRACT

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.

14.
Ear Hear ; 41(3): 549-560, 2020.
Article in English | MEDLINE | ID: mdl-31453875

ABSTRACT

OBJECTIVES: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates that are both slower and faster than normal. DESIGN: Participants (N = 145), ranging in age from 22 to 92, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal rate), two rates that were slower than normal, and two rates that were faster than normal. Signal to noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition and this signal to noise ratio was used in the auditory-only and auditory-visual conditions. RESULTS: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. CONCLUSIONS: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech for auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates. 
The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss.
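The claim that unimodal performance accounts for at least 60% of the variance corresponds to an R² ≥ 0.60 from a regression of auditory-visual scores on the two unimodal scores. A minimal sketch on simulated (hypothetical) data, not the study's dataset:

```python
import numpy as np

# Hypothetical proportion-correct scores for 10 listeners at one speaking rate.
rng = np.random.default_rng(2)
a_only = rng.uniform(0.2, 0.4, 10)   # auditory-only (near the ~30% target level)
v_only = rng.uniform(0.1, 0.5, 10)   # visual-only
# Simulated auditory-visual scores driven largely by the unimodal scores:
av = 0.3 + 0.8 * a_only + 0.5 * v_only + rng.normal(0, 0.02, 10)

# Least-squares fit of av ~ a_only + v_only, with an intercept column.
X = np.column_stack([np.ones(10), a_only, v_only])
beta, *_ = np.linalg.lstsq(X, av, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((av - pred) ** 2) / np.sum((av - np.mean(av)) ** 2)
print(f"variance in AV explained by unimodal scores: R^2 = {r2:.2f}")
```

An R² at or above 0.60 in each rate condition is what licenses the conclusion that auditory-visual performance is largely predictable from unimodal performance.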


Subject(s)
Speech Perception, Acoustic Stimulation, Aged, Auditory Perception, Humans, Speech, Visual Perception
15.
J Acoust Soc Am ; 144(6): 3437, 2018 12.
Article in English | MEDLINE | ID: mdl-30599649

ABSTRACT

This paper presents an investigation of children's subglottal resonances (SGRs), the natural frequencies of the tracheo-bronchial acoustic system. A total of 43 children (31 male, 12 female) aged between 6 and 18 yr were recorded. Both microphone signals of various consonant-vowel-consonant words and subglottal accelerometer signals of the sustained vowel /ɑ/ were recorded for each of the children, along with age and standing height. The first three SGRs of each child were measured from the sustained vowel subglottal accelerometer signals. A model relating SGRs to standing height was developed based on the quarter-wavelength resonator model, previously developed for adult SGRs and heights. Based on difficulties in predicting the higher SGR values for the younger children, the model of the third SGR was refined to account for frequency-dependent acoustic lengths of the tracheo-bronchial system. This updated model more accurately estimates both adult and child SGRs based on their heights. These results indicate the importance of considering frequency-dependent acoustic lengths of the subglottal system.
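The quarter-wavelength resonator model mentioned above treats the tracheo-bronchial system as a tube closed at one end, whose resonances fall at odd multiples of c/4L. A minimal sketch follows; the acoustic length used is an illustrative assumption, not a value from the study:

```python
# Quarter-wavelength (closed-open tube) resonances: f_n = (2n - 1) * c / (4 * L).
C = 35000.0  # approximate speed of sound in warm, humid air, in cm/s

def quarter_wave_resonances(length_cm, n_resonances=3):
    """First n resonance frequencies (Hz) of a tube of the given acoustic length."""
    return [(2 * n - 1) * C / (4 * length_cm) for n in range(1, n_resonances + 1)]

# For an assumed tracheo-bronchial acoustic length of 14 cm (hypothetical):
sgr1, sgr2, sgr3 = quarter_wave_resonances(14.0)
print(f"Sg1 = {sgr1:.0f} Hz, Sg2 = {sgr2:.0f} Hz, Sg3 = {sgr3:.0f} Hz")
# Sg1 = 625 Hz, Sg2 = 1875 Hz, Sg3 = 3125 Hz
```

With a single fixed length, Sg2 and Sg3 fall at exactly 3 and 5 times Sg1; the refined model described in the abstract relaxes this constraint by letting the effective acoustic length vary with frequency.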

16.
Psychol Aging ; 32(6): 572-587, 2017 09.
Article in English | MEDLINE | ID: mdl-28891669

ABSTRACT

The presence of noise and interfering information can pose major difficulties during speech perception, particularly for older adults. Analogously, interference from similar representations during retrieval is a major cause of age-related memory failures. To demonstrate a suppression mechanism that underlies such speech and memory difficulties, we tested the hypothesis that interference between targets and competitors is resolved by suppressing competitors, thereby rendering them less intelligible in noise. In a series of experiments using a paradigm adapted from Healey, Hasher, and Campbell (2013), we presented a list of words that included target/competitor pairs of orthographically similar words (e.g., ALLERGY and ANALOGY). After a delay, participants solved fragments (e.g., A_L__GY), some of which resembled both members of the target/competitor pair, but could only be completed by the target. We then assessed the consequence of having successfully resolved this interference by asking participants to identify words in noise, some of which included the rejected competitor words from the previous phase. Consistent with a suppression account of interference resolution, younger adults reliably demonstrated reduced identification accuracy for competitors, indicating that they had effectively rejected, and therefore suppressed, competitors. In contrast, older adults showed a relative increase in accuracy for competitors relative to young adults. Such results suggest that older adults' reduced ability to suppress these representations resulted in sustained access to lexical traces, subsequently increasing perceptual identification of such items. We discuss these findings within the framework of inhibitory control theory in cognitive aging and its implications for age-related changes in speech perception.


Subject(s)
Aging/psychology, Memory, Speech Perception, Adult, Aged, Female, Humans, Male, Young Adult
17.
Ear Hear ; 37 Suppl 1: 5S-27S, 2016.
Article in English | MEDLINE | ID: mdl-27355771

ABSTRACT

The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. The idea traces back to early accounts of the effects of attention on perception, which used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.


Subject(s)
Attention , Cognition , Hearing Loss/psychology , Speech Perception , Auditory Perception , Comprehension , Humans
18.
Ear Hear ; 37 Suppl 1: 62S-8S, 2016.
Article in English | MEDLINE | ID: mdl-27355772

ABSTRACT

One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker, compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement, the improvement in speech understanding in going from an auditory-only (A-only) condition to an auditory-visual (AV) condition. To compare word recognition in the two modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task: participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise; list presentation was stopped at random points, and participants were then asked to repeat the last three words presented. Listening effort was indexed by recall performance in the two- and three-back positions. Younger, but not older, adults exhibited reduced listening effort, showing greater recall in the two- and three-back positions for AV than for A-only presentations. For younger, but not older, adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception.


Subject(s)
Noise , Speech Perception , Visual Perception , Acoustic Stimulation , Adolescent , Age Factors , Aged , Pure-Tone Audiometry , Auditory Perception , Female , Humans , Male , Middle Aged , Photic Stimulation , Young Adult
19.
Atten Percept Psychophys ; 78(1): 346-54, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26474981

ABSTRACT

Whereas the energetic and informational masking effects of unintelligible babble on auditory speech recognition are well established, the present study is the first to investigate its effects on visual speech recognition. Young and older adults performed two lipreading tasks while simultaneously experiencing either quiet, speech-shaped noise, or 6-talker background babble. Both words at the end of uninformative carrier sentences and key words in everyday sentences were harder to lipread in the presence of babble than in the presence of speech-shaped noise or quiet. Contrary to the inhibitory deficit hypothesis of cognitive aging, babble had equivalent effects on young and older adults. In a follow-up experiment, neither the babble nor the speech-shaped noise stimuli interfered with performance of a face-processing task, indicating that babble selectively interferes with visual speech recognition and not with visual perception tasks per se. The present results demonstrate that babble can produce cross-modal informational masking and suggest a breakdown in audiovisual scene analysis, either because of obligatory monitoring of even uninformative speech sounds or because of obligatory efforts to integrate speech sounds even with uncorrelated mouth movements.


Subject(s)
Aging/psychology , Lipreading , Perceptual Masking , Speech Perception , Visual Perception , Acoustic Stimulation/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Noise , Phonetics , Speech , Young Adult
20.
Psychol Aging ; 30(3): 634-46, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26121287

ABSTRACT

Audiovisual (AV) speech perception is the process by which auditory and visual sensory signals are integrated and used to understand what a talker is saying during face-to-face communication. This form of communication is markedly superior to speech perception in either sensory modality alone. However, additional lexical factors affected by age-related cognitive changes may contribute to individual differences in AV perception. In the current study, we extended an existing model of spoken word identification to the AV domain and examined the cognitive factors that contribute to age-related and individual differences in AV perception of words varying in lexical difficulty (i.e., in the number of competing items). Young (n = 49) and older (n = 50) adults completed a series of cognitive inhibition tasks and a spoken word identification task. The words were presented in auditory-only, visual-only, and AV conditions and were equally divided into lexically hard (many competitors) and lexically easy (few competitors) sets. Overall, young adults demonstrated better inhibitory abilities and higher identification performance than older adults. However, whereas no relationship was observed between inhibitory abilities and AV word identification performance in young adults, there was a significant relationship between Stroop interference and AV identification of lexically hard words in older adults. These results are interpreted within the framework of existing models of spoken-word recognition, with implications for how cognitive deficits in older adults contribute to speech perception.


Subject(s)
Aging/psychology , Inhibition, Psychological , Speech Perception/physiology , Visual Perception/physiology , Aged , Cognition , Cognition Disorders/psychology , Communication , Female , Humans , Individuality , Language , Male , Stroop Test , Young Adult