Results 1 - 20 of 65
1.
Ear Hear ; 45(1): 164-173, 2024.
Article in English | MEDLINE | ID: mdl-37491715

ABSTRACT

OBJECTIVES: Speech perception training can be a highly effective intervention to improve perception and language abilities in children who are deaf or hard of hearing. Most studies of speech perception training, however, only measure gains immediately following training. Only a minority of cases include a follow-up assessment after a period without training. A critical unanswered question was whether training-related benefits are retained for a period of time after training has stopped. A primary goal of this investigation was to determine whether children retained training-related benefits 4 to 6 weeks after they completed 16 hours of formal speech perception training. Training comprised auditory training, speechreading training, or a combination of both. It is also important to determine whether "booster" training can help increase gains made during the initial intensive training period. Another goal of the study was therefore to investigate the benefits of providing home-based booster training during the 4- to 6-week interval after the formal training ceased. The original investigation (Tye-Murray et al. 2022) compared the effects of talker familiarity and the relative benefits of the different types of training. We predicted that the children who received no additional training would retain their gains after completing the formal training. We also predicted that the children who completed the booster training would realize additional gains. DESIGN: Children with hearing loss, 6 to 12 years old, who had previously participated in the original randomized controlled study returned 4 to 6 weeks after its conclusion to take a follow-up speech perception assessment. The first group (n = 44) returned after receiving no formal intervention from the research team before the follow-up assessment. A second group of 40 children completed an additional 16 hours of speech perception training at home during a 4- to 6-week interval before the follow-up speech perception assessment.
The home-based speech perception training was a continuation of the training received in the laboratory, reformatted to run on a PC tablet with a portable speaker. The follow-up speech perception assessment included measures of listening and speechreading, with test items spoken by both familiar (trained) and unfamiliar (untrained) talkers. RESULTS: In the group that did not receive booster training, follow-up testing showed retention of all gains that were obtained immediately following the laboratory-based training. The group that received booster training during the same interval also maintained the benefits of the formal training, with some indication of minor additional improvement. CONCLUSIONS: Clinically, the present findings are extremely encouraging; the group that did not receive home-based booster training retained the benefits obtained during the laboratory-based training regimen. Moreover, the results suggest that self-paced booster training maintained the relative training gains associated with talker familiarity and training type seen immediately following laboratory-based training. Future aural rehabilitation programs should include maintenance training at home to supplement the speech perception training conducted under more formal conditions at school or in the clinic.


Subject(s)
Correction of Hearing Impairment, Deafness, Hearing Loss, Speech Perception, Child, Humans, Hearing Loss/rehabilitation, Lipreading, Correction of Hearing Impairment/methods
2.
J Neurosci ; 42(3): 435-442, 2022 01 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and at several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise).
Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism, relying on synchronized brain activity among sensory and motor regions, may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.


Subject(s)
Auditory Cortex/physiology, Language, Lipreading, Nerve Net/physiology, Speech Perception/physiology, Visual Cortex/physiology, Visual Perception/physiology, Adult, Aged, Aged, 80 and over, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Visual Cortex/diagnostic imaging, Young Adult
3.
Mem Cognit ; 51(2): 273-289, 2023 02.
Article in English | MEDLINE | ID: mdl-35896764

ABSTRACT

Prior research suggests that second language (L2) vocabulary learning often occurs through lexical inferencing (translating based on context), but there has been less emphasis on how lexical inferencing compares with other methods of L2 word learning. The present study compared lexical inferencing with simply studying word lists for L2 learning. A secondary goal was to determine whether any effect of inferencing is mediated by the generation effect of memory, a phenomenon wherein generated information (inferencing) is remembered better than obtained information (reading). Across four experiments, participants read English sentences with embedded Swahili words and either were asked to infer the word meanings using context or were provided with translations before reading the sentences (reading condition). In contrast to our initial hypotheses, the inference condition resulted in lower rates of retention than the reading condition. In addition, the data suggest a number of differences between lexical inferencing and the generation effect that argue against the proposal that lexical inferencing operates as a type of generation effect.


Subject(s)
Learning, Vocabulary, Humans, Cohort Effect, Language, Verbal Learning
4.
Ear Hear ; 43(1): 181-191, 2022.
Article in English | MEDLINE | ID: mdl-34225318

ABSTRACT

OBJECTIVES: Transfer appropriate processing (TAP) refers to the general finding that training gains are maximized when training and testing are conducted under the same conditions. The present study tested the extent to which TAP applies to speech perception training in children with hearing loss. Specifically, we assessed the benefits of computer-based speech perception training games for enhancing children's speech recognition by comparing three training groups: auditory training (AT), audiovisual training (AVT), and a combination of the two (AT/AVT). We also determined whether talker-specific training, as might occur when children train with the speech of the next year's classroom teacher, leads to better recognition of that talker's speech and, if so, the extent to which training benefits generalize to untrained talkers. Consistent with TAP theory, we predicted that children would improve their ability to recognize the speech of the trained talker more than that of three untrained talkers and, depending on their training group, would improve more on an auditory-only (listening) or audiovisual (speechreading) speech perception assessment that matched the type of training they received. We also hypothesized that benefit would generalize to untrained talkers and to test modalities in which they did not train, albeit to a lesser extent. DESIGN: Ninety-nine elementary school-aged children with hearing loss were enrolled in a randomized controlled trial with a repeated-measures A-A-B mixed experimental design in which children served as their own controls for assessing the overall benefit of a particular training type, and three different groups of children yielded data for comparing the three types of training.
We also assessed talker-specific learning and transfer of learning by including speech perception tests with stimuli spoken by the talker with whom a child trained and stimuli spoken by three talkers with whom the child did not train, and by including speech perception tests that presented both auditory (listening) and audiovisual (speechreading) stimuli. Children received 16 hr of gamified training. The games provided word identification and connected speech comprehension training activities. RESULTS: Overall, children showed significant improvement in both their listening and speechreading performance. Consistent with TAP theory, children improved more on their trained talker than on the untrained talkers. Also consistent with TAP theory, the children who received AT improved more on the listening assessment than on the speechreading assessment. However, children who received AVT improved equally on both types of assessment, which is not consistent with our predictions derived from a TAP perspective. Age, language level, and phonological awareness were either not predictive of training benefits or only negligibly so. CONCLUSIONS: The findings support the practice of providing children who have hearing loss with structured speech perception training and suggest that future aural rehabilitation programs might include teacher-specific speech perception training to prepare children for an upcoming school year, especially since training generalizes to other talkers. The results also suggest that the benefits of speech perception training were not significantly related to age, language level, or degree of phonological awareness. The findings are largely consistent with TAP theory, suggesting that the more aligned a training task is with the desired outcome, the more likely benefit is to accrue.


Subject(s)
Deafness, Hearing Loss, Speech Perception, Child, Computers, Humans, Lipreading, Speech
5.
Mem Cognit ; 50(7): 1414-1431, 2022 10.
Article in English | MEDLINE | ID: mdl-35143034

ABSTRACT

In a masked form priming lexical decision task, orthographically related word primes cause null or inhibitory priming relative to unrelated controls because of lexical competition between primes and targets, whereas orthographically related nonword primes lead to facilitation because nonwords are not lexically represented and hence do not evoke lexical competition. This prime lexicality effect (PLE) has been used as an index of new word lexicalization in the developing lexicon, using to-be-learned words and their orthographic neighbors as primes and targets, respectively. Experiment 1 confirmed an inhibitory effect of -46 ms among native English speakers and a facilitatory effect of 52 ms among Japanese English learners without critical word training. In Experiment 2, Japanese English learners studied novel English words while performing a meaning-based task, a form-based task, or no task during learning. Recall measures indicated a dissociation between these two types of processing, with the form-based task leading to greater recall of L2 words and the meaning-based task leading to greater recall of L1 words. Results indicated that all three learning conditions produced neither facilitation nor inhibition (a null priming effect). Taken together, the results of the two experiments demonstrate that the PLE can occur in a second language (L2) and that the training procedure can yield at least partial lexicalization of new L2 words.


Subject(s)
Language, Verbal Learning, Humans, Inhibition, Psychological, Learning, Motor Activity, Reaction Time/physiology, Reading, Verbal Learning/physiology
6.
J Acoust Soc Am ; 152(6): 3216, 2022 12.
Article in English | MEDLINE | ID: mdl-36586857

ABSTRACT

Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.


Subject(s)
Illusions, Speech Perception, Humans, Visual Perception, Language, Speech, Auditory Perception, Photic Stimulation, Acoustic Stimulation
7.
Ear Hear ; 42(6): 1656-1667, 2021.
Article in English | MEDLINE | ID: mdl-34320527

ABSTRACT

OBJECTIVE: Spoken communication is better when one can see as well as hear the talker. Tye-Murray and colleagues found that even when age-related deficits in audiovisual (AV) speech perception were observed, AV performance could be accurately predicted from auditory-only (A-only) and visual-only (V-only) performance, and knowing individuals' ages did not increase the accuracy of the prediction. This finding contradicts conventional wisdom, according to which age-related differences in AV speech perception are due to deficits in the integration of auditory and visual information. Our primary goal was to determine whether Tye-Murray et al.'s finding with a closed-set test generalizes to situations more like those in everyday life. A second goal was to test a new predictive model that has important implications for audiological assessment. DESIGN: Participants (N = 109; ages 22-93 years), previously studied by Tye-Murray et al., were administered our new, open-set Lex-List test to assess their auditory, visual, and audiovisual perception of individual words. All testing was conducted in six-talker babble (three males and three females) presented at approximately 62 dB SPL. The audio level of the Lex-List items, when presented, was approximately 59 dB SPL because pilot testing suggested that this signal-to-noise ratio would avoid ceiling performance in the AV condition. RESULTS: Multiple linear regression analyses revealed that A-only and V-only performance accounted for 87.9% of the variance in AV speech perception, and that the contribution of age failed to reach significance. Our new parabolic model accounted for even more (92.8%) of the variance in AV performance, and again, the contribution of age was not significant.
Bayesian analyses revealed that for both the linear and parabolic models, the present data were almost 10 times as likely to occur under a reduced model (without age) as under a full model (with age as a predictor). Furthermore, comparison of the two reduced models revealed that the data were more than 100 times as likely to occur under the parabolic model as under the linear regression model. CONCLUSIONS: The present results strongly support Tye-Murray et al.'s hypothesis that AV performance can be accurately predicted from unimodal performance and that knowing individuals' ages does not increase the accuracy of that prediction. Our results represent an important initial step in extending Tye-Murray et al.'s findings to situations more like those encountered in everyday communication. The accuracy with which speech perception was predicted in this study foreshadows a form of precision audiology in which determining individual strengths and weaknesses in unimodal and multimodal speech perception facilitates identification of targets for rehabilitative efforts aimed at recovering and maintaining the speech perception abilities critical to the quality of an older adult's life.
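The core analysis described above — predicting AV accuracy from the two unimodal scores and asking whether extra terms add explanatory power — can be sketched with ordinary least squares. This is a minimal illustration with made-up scores; the paper does not specify its parabolic model here, so the squared-term version below is only one plausible reading.

```python
import numpy as np

# Illustrative data only: hypothetical proportion-correct scores for eight
# listeners, not values from the study.
a_only = np.array([0.30, 0.45, 0.50, 0.60, 0.70, 0.25, 0.55, 0.65])
v_only = np.array([0.10, 0.20, 0.15, 0.30, 0.35, 0.05, 0.25, 0.40])
av     = np.array([0.42, 0.60, 0.62, 0.80, 0.88, 0.33, 0.72, 0.90])

def r_squared(X, y):
    """Fit y ~ X by least squares and return the proportion of variance
    explained (R^2)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Linear model: AV predicted from A-only and V-only scores.
X_linear = np.column_stack([np.ones_like(a_only), a_only, v_only])
r2_linear = r_squared(X_linear, av)

# One plausible "parabolic" extension: add squared unimodal terms.
X_parab = np.column_stack([X_linear, a_only**2, v_only**2])
r2_parab = r_squared(X_parab, av)
```

Adding an age column to either design matrix and comparing the resulting R² values mirrors the paper's full-versus-reduced model comparison.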


Subject(s)
Audiology, Speech Perception, Adult, Aged, Aged, 80 and over, Bayes Theorem, Female, Hearing, Humans, Male, Middle Aged, Noise, Visual Perception, Young Adult
8.
Ear Hear ; 41(3): 549-560, 2020.
Article in English | MEDLINE | ID: mdl-31453875

ABSTRACT

OBJECTIVES: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates both slower and faster than normal. DESIGN: Participants (N = 145), ranging in age from 22 to 92 years, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal) rate, two rates slower than normal, and two rates faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. RESULTS: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. CONCLUSIONS: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech in auditory-only presentations can also be observed in visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates.
The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss.


Subject(s)
Speech Perception, Acoustic Stimulation, Aged, Auditory Perception, Humans, Speech, Visual Perception
9.
Mem Cognit ; 48(5): 870-883, 2020 07.
Article in English | MEDLINE | ID: mdl-31975029

ABSTRACT

Both languages are jointly activated in the bilingual brain, requiring bilinguals to select the target language while avoiding interference from the unwanted language. This cross-language interference is similar to the within-language interference created by the Deese-Roediger-McDermott false memory paradigm (DRM; Roediger & McDermott, 1995, Journal of Experimental Psychology: Learning, Memory, and Cognition, 21[4], 803-814). Although the mechanisms mediating false memory in the DRM paradigm remain an area of investigation, two of the more prominent theories-implicit associative response (IAR) and fuzzy trace-provide frameworks for using the DRM paradigm to advance our understanding of bilingual language processing. Three studies are reported comparing accuracy of monolingual and bilingual participants on different versions of the DRM. Study 1 presented lists of phonological associates and found that bilinguals showed higher rates of false recognition than did monolinguals. Study 2 used the standard semantic variant of the task and found that bilinguals showed lower false recognition rates than did monolinguals. Study 3 replicated and extended the findings in Experiment 2 in another semantic version of the task presented to younger and older adult monolingual and bilingual participants. These results are discussed within the frameworks of IAR and fuzzy-trace theories as further explicating differences between monolingual and bilingual processing.


Subject(s)
Language, Cognition, Humans, Memory
10.
Mem Cognit ; 48(8): 1403-1416, 2020 11.
Article in English | MEDLINE | ID: mdl-32671592

ABSTRACT

A number of recent studies have shown that older adults are more susceptible to context-based misperceptions in hearing (Rogers, Jacoby, & Sommers, Psychology and Aging, 27, 33-45, 2012; Sommers, Morton, & Rogers, Remembering: Attributions, Processes, and Control in Human Memory [Essays in Honor of Larry Jacoby], pp. 269-284, 2015) than are young adults. One explanation for these age-related increases in what we term false hearing is that older adults are less able than young individuals to inhibit a prepotent response favored by context. A similar explanation has been proposed for demonstrations of age-related increases in false memory (Jacoby, Bishara, Hessels, & Toth, Journal of Experimental Psychology: General, 134, 131-148, 2005). The present study was designed to compare susceptibility to false hearing and false memory in a group of young and older adults. In Experiment 1, we replicated the findings of past studies demonstrating increased frequency of false hearing in older, relative to young, adults. In Experiment 2, we demonstrated older adults' increased susceptibility to false memory in the same sample. Importantly, we found that participants who were more prone to false hearing also tended to be more prone to false memory, supporting the idea that the two phenomena share a common mechanism. The results are discussed within the framework of a capture model, which differentiates between context-based responding resulting from failures of cognitive control and context-based guessing.


Subject(s)
Hearing, Memory, Aged, Aging, Humans
11.
Behav Res Methods ; 52(4): 1795-1799, 2020 08.
Article in English | MEDLINE | ID: mdl-31993960

ABSTRACT

In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence. Sentences varied between eight and ten words in length. At least 100 unique participants contributed to each sentence. All responses were reviewed by human raters to mitigate the influence of misspellings and typographical errors. The responses provide a range of predictability values for 13,438 unique target words, 6790 of which appear in more than one sentence context. We also provide entropy values based on the relative predictability of multiple responses. A searchable set of norms is available at http://sentencenorms.net. Finally, we provide the code used to collate and organize the responses to facilitate additional analyses and future research projects.
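The entropy values mentioned above can be computed directly from the distribution of responses each sentence frame elicits. A minimal sketch (the function name and example completions are hypothetical, not taken from the norms):

```python
import math
from collections import Counter

def cloze_entropy(responses):
    """Shannon entropy (in bits) of the cloze-response distribution for one
    sentence frame: higher entropy means the context is less constraining."""
    counts = Counter(responses)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# If all 100 participants give the same completion, entropy is 0 bits;
# an even split over two completions gives exactly 1 bit.
low = cloze_entropy(["coffee"] * 100)                  # -> 0.0
high = cloze_entropy(["coffee"] * 50 + ["tea"] * 50)   # -> 1.0
```

Cloze probability for a given target word is simply its count divided by the total, so both measures come from the same tally.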


Subject(s)
Comprehension, Language, Humans
12.
Ear Hear ; 40(3): 517-528, 2019.
Article in English | MEDLINE | ID: mdl-31026238

ABSTRACT

OBJECTIVES: The overall goal of this study was to compare verbal and visuospatial working memory in children with normal hearing (NH) and with cochlear implants (CI). The main questions addressed by this study were (1) Does auditory deprivation result in global or domain-specific deficits in working memory in children with CIs compared with their NH age mates? (2) Does the potential for verbal recoding affect performance on measures of reasoning ability in children with CIs relative to their NH age mates? and (3) Is performance on verbal and visuospatial working memory tasks related to spoken receptive language level achieved by children with CIs? DESIGN: A total of 54 children ranging in age from 5 to 9 years participated; 25 children with CIs and 29 children with NH. Participants were tested on both simple and complex measures of verbal and visuospatial working memory. Vocabulary was assessed with the Peabody Picture Vocabulary Test (PPVT) and reasoning abilities with two subtests of the WISC-IV (Wechsler Intelligence Scale for Children, 4th edition): Picture Concepts (verbally mediated) and Matrix Reasoning (visuospatial task). Groups were compared on all measures using analysis of variance after controlling for age and maternal education. RESULTS: Children with CIs scored significantly lower than children with NH on measures of working memory, after accounting for age and maternal education. Differences between the groups were more apparent for verbal working memory compared with visuospatial working memory. For reasoning and vocabulary, the CI group scored significantly lower than the NH group for PPVT and WISC Picture Concepts but similar to NH age mates on WISC Matrix Reasoning. CONCLUSIONS: Results from this study suggest that children with CIs have deficits in working memory related to storing and processing verbal information in working memory. 
These deficits extend to receptive vocabulary and verbal reasoning and remain even after controlling for the higher maternal education level of the NH group. Their ability to store and process visuospatial information in working memory and complete reasoning tasks that minimize verbal labeling of stimuli more closely approaches performance of NH age mates.


Subject(s)
Cochlear Implantation, Deafness/rehabilitation, Memory, Short-Term, Spatial Processing, Case-Control Studies, Child, Child, Preschool, Deafness/psychology, Female, Humans, Male
13.
J Acoust Soc Am ; 144(6): 3437, 2018 12.
Article in English | MEDLINE | ID: mdl-30599649

ABSTRACT

This paper presents an investigation of children's subglottal resonances (SGRs), the natural frequencies of the tracheo-bronchial acoustic system. A total of 43 children (31 male, 12 female) aged between 6 and 18 yr were recorded. Both microphone signals of various consonant-vowel-consonant words and subglottal accelerometer signals of the sustained vowel /ɑ/ were recorded for each of the children, along with age and standing height. The first three SGRs of each child were measured from the sustained vowel subglottal accelerometer signals. A model relating SGRs to standing height was developed based on the quarter-wavelength resonator model, previously developed for adult SGRs and heights. Based on difficulties in predicting the higher SGR values for the younger children, the model of the third SGR was refined to account for frequency-dependent acoustic lengths of the tracheo-bronchial system. This updated model more accurately estimates both adult and child SGRs based on their heights. These results indicate the importance of considering frequency-dependent acoustic lengths of the subglottal system.
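The quarter-wavelength resonator model mentioned above treats the subglottal airways as a tube effectively closed at one end, so its natural frequencies fall at odd multiples of c/4L. The sketch below illustrates that relation; the height-to-acoustic-length fraction is a purely hypothetical placeholder, since the paper fits that mapping (and its frequency-dependent refinement) to measured data.

```python
def quarter_wave_resonances(length_m, c=350.0, n=3):
    """First n natural frequencies (Hz) of a quarter-wavelength resonator:
    f_k = (2k - 1) * c / (4 * L), with c the speed of sound in warm, humid
    tracheal air (m/s) and L the acoustic length of the tube (m)."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

def sgr_estimates_from_height(height_m, length_fraction=0.12):
    """Estimate SGR1-SGR3 from standing height, assuming the subglottal
    acoustic length is a fixed fraction of height. The 0.12 fraction is an
    illustrative assumption, not the paper's fitted value."""
    return quarter_wave_resonances(height_m * length_fraction)
```

For an acoustic length of 0.14 m this gives resonances at 625, 1875, and 3125 Hz, i.e., in the 1:3:5 ratio the model predicts; the paper's refinement lets the effective L vary with frequency to fit children's higher SGRs.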

14.
Ear Hear ; 37 Suppl 1: 62S-8S, 2016.
Article in English | MEDLINE | ID: mdl-27355772

ABSTRACT

One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker, compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement: the improvement in speech understanding in going from an auditory-only (A-only) to an auditory-visual (AV) condition. To compare word recognition in A-only and AV modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task. Participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise. List presentation was stopped randomly, and participants were then asked to repeat the last three words that were presented. Listening effort was assessed using recall performance in the two- and three-back positions. Younger, but not older, adults exhibited reduced listening effort, as indexed by greater recall in the two- and three-back positions for the AV compared with the A-only presentations. For younger, but not older, adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception.


Subject(s)
Noise, Speech Perception, Visual Perception, Acoustic Stimulation, Adolescent, Age Factors, Aged, Audiometry, Pure-Tone, Auditory Perception, Female, Humans, Male, Middle Aged, Photic Stimulation, Young Adult
15.
Ear Hear ; 37 Suppl 1: 5S-27S, 2016.
Article in English | MEDLINE | ID: mdl-27355771

ABSTRACT

The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to early work describing the effects of attention on perception, which used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.


Subject(s)
Attention , Cognition , Hearing Loss/psychology , Speech Perception , Auditory Perception , Comprehension , Humans
17.
J Acoust Soc Am ; 137(3): 1443-51, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25786955

ABSTRACT

The current work investigated the role of single vowels in talker normalization. Following initial training to identify six talkers from the isolated vowel /i/, participants were asked to identify vowels in three different conditions. In the blocked-talker conditions, the vowels were blocked by talker. In the mixed-talker conditions, vowels from all six talkers were presented in random order. The precursor mixed-talker conditions were identical to the mixed-talker conditions except that participants were provided with either a sample vowel or just the written name of a talker before target-vowel presentation. In experiment 1, the precursor vowel was always spoken by the same talker as the target vowel. Identification accuracy did not differ significantly between the blocked and precursor mixed-talker conditions, and both were better than in the pure mixed-talker condition. In experiment 2, half of the trials had a precursor spoken by the same talker as the target and half had a different talker. For the same-talker precursor condition, the results replicated those of experiment 1. In the different-talker precursor condition, no benefit was observed relative to the pure mixed-talker condition. In experiment 3, only the written name was presented as a precursor, and no benefit was observed relative to the pure mixed-talker condition.


Subject(s)
Cues , Recognition (Psychology) , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adolescent , Speech Audiometry , Female , Humans , Male , Perceptual Masking , Phonetics , Young Adult
18.
Psychophysiology ; 60(7): e14256, 2023 07.
Article in English | MEDLINE | ID: mdl-36734299

ABSTRACT

Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds due to systematic physiological changes over time. In this paper, we investigated the degree to which fatigue effects could be ameliorated by experimenter intervention. In Experiment 1, we assigned participants to one of three groups-no breaks, kinetic breaks (playing with toys, but no social interaction), or chatting with a research assistant-and compared the pupil response across conditions. In Experiment 2, we additionally tested the effect of researcher observation. Only breaks including social interaction significantly reduced the fatigue of the pupil response across trials. However, in all conditions we found robust evidence for fatigue effects: that is, regardless of protocol, the task-evoked pupil response was substantially diminished (at least 60%) over the duration of the experiment. We account for the variance of fatigue effects in our pupillometry data using multiple common statistical modeling approaches (e.g., linear mixed-effects models of peak, mean, and baseline pupil diameters, as well as growth curve models of time-course data). We conclude that pupil attenuation is a predictable phenomenon that should be accommodated in our experimental designs and statistical models.
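The fatigue effect described above can be summarized numerically as the decline of the task-evoked pupil response across trials. The sketch below is purely illustrative (it is not the authors' analysis code, and the pupil values are hypothetical): it estimates the per-trial decline with an ordinary least-squares slope and expresses the first-to-last drop as a proportion, in the spirit of the "at least 60% diminished" finding.

```python
# Illustrative sketch only: quantify a "fatigue effect" as the decline of the
# task-evoked pupil response across trials. All numbers below are made up.

def fatigue_slope(responses):
    """OLS slope of response magnitude versus 1-based trial index."""
    n = len(responses)
    trials = range(1, n + 1)
    mean_t = sum(trials) / n
    mean_r = sum(responses) / n
    cov = sum((t - mean_t) * (r - mean_r) for t, r in zip(trials, responses))
    var = sum((t - mean_t) ** 2 for t in trials)
    return cov / var

def percent_attenuation(responses):
    """Drop from first to last trial, as a fraction of the first response."""
    return (responses[0] - responses[-1]) / responses[0]

if __name__ == "__main__":
    # Hypothetical peak pupil dilations (mm) shrinking over 10 trials.
    peaks = [0.50, 0.46, 0.41, 0.38, 0.33, 0.30, 0.27, 0.24, 0.21, 0.18]
    print(round(fatigue_slope(peaks), 4))        # negative slope = attenuation
    print(round(percent_attenuation(peaks), 2))  # fraction lost by final trial
```

In practice the study used richer models (linear mixed-effects and growth curve models); the slope here is only the simplest possible stand-in for the same idea.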


Subject(s)
Fatigue , Pupil , Humans , Pupil/physiology , Cognition/physiology
19.
J Acoust Soc Am ; 132(4): 2592-602, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23039452

ABSTRACT

This paper presents a large-scale study of subglottal resonances (SGRs) (the resonant frequencies of the tracheo-bronchial tree) and their relations to various acoustical and physiological characteristics of speakers. The paper presents data from a corpus of simultaneous microphone and accelerometer recordings of consonant-vowel-consonant (CVC) words embedded in a carrier phrase spoken by 25 male and 25 female native speakers of American English ranging in age from 18 to 24 yr. The corpus contains 17,500 utterances of 14 American English monophthongs, diphthongs, and the rhotic approximant [ɹ] in various CVC contexts. Only monophthongs are analyzed in this paper. Speaker height and age were also recorded. Findings include (1) normative data on the frequency distribution of SGRs for young adults, (2) the dependence of SGRs on height, (3) the lack of a correlation between SGRs and formants or the fundamental frequency, (4) a poor correlation of the first SGR with the second and third SGRs but a strong correlation between the second and third SGRs, and (5) a significant effect of vowel category on SGR frequencies, although this effect is smaller than the measurement standard deviations and therefore negligible for practical purposes.
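The height and inter-resonance relations reported above are correlation findings. As a minimal sketch of the statistic involved (not the corpus analysis itself; the paired height/SGR values below are hypothetical), a Pearson product-moment correlation can be computed as:

```python
# Illustrative sketch only: Pearson correlation, the kind of statistic used to
# relate subglottal resonance (SGR) frequencies to speaker height and to other
# SGRs. The data below are hypothetical, not values from the corpus.

import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    heights_cm = [160, 165, 170, 175, 180, 185]
    sgr2_hz = [1450, 1420, 1400, 1370, 1350, 1320]  # taller speaker, lower SGR2
    print(round(pearson_r(heights_cm, sgr2_hz), 3))
```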


Subject(s)
Glottis/physiology , Language , Phonation , Speech Acoustics , Voice Quality , Accelerometry , Adolescent , Age Factors , Biomechanical Phenomena , Body Height , Female , Humans , Male , Sex Factors , Sound Spectrography , Speech Production Measurement , Vibration , Young Adult
20.
J Am Acad Audiol ; 23(8): 623-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22967737

ABSTRACT

BACKGROUND: Patients seeking treatment for hearing-related communication difficulties are often disappointed with the eventual outcomes, even after they receive a hearing aid or a cochlear implant. One approach that audiologists have used to improve communication outcomes is to provide auditory training (AT), but compliance rates for completing AT programs are notoriously low. PURPOSE: The primary purpose of the investigation was to conduct a patient-based evaluation of the benefits of an AT program, I Hear What You Mean, in order to determine how the AT experience might be improved. A secondary purpose was to examine whether patient perceptions of the AT experience varied depending on whether they were trained with a single talker's voice or heard training materials from multiple talkers. RESEARCH DESIGN: Participants completed a 6 wk auditory training program and were asked to respond to a posttraining questionnaire. Half of the participants heard the training materials spoken by six different talkers, and half heard the materials produced by only one of the six talkers. STUDY SAMPLE: Participants included 78 adult hearing-aid users and 15 cochlear-implant users for a total of 93 participants who completed the study, ages 18 to 89 yr (M = 66 yr, SD = 16.67 yr). Forty-three females and 50 males participated. The mean better ear pure-tone average for the participants was 56 dB HL (SD = 25 dB). INTERVENTION: Participants completed the single- or multiple-talker version of the 6 wk computerized AT program, I Hear What You Mean, followed by completion of a posttraining questionnaire in order to rate the benefits of overall training and the training activities and to describe what they liked best and what they liked least. DATA COLLECTION AND ANALYSIS: After completing a 6 wk computerized AT program, participants completed a posttraining questionnaire. 
Seven-point Likert scaled responses to whether understanding spoken language had improved were converted to individualized z scores and analyzed for changes due to AT. Written responses were coded and categorized to consider both positive and negative subjective opinions of the AT program. Regression analyses were conducted to examine the relationship between perceived effort and perceived benefit and to identify factors that predict overall program enjoyment. RESULTS: Participants reported improvements in their abilities to recognize spoken language and in their self-confidence as a result of participating in AT. Few differences were observed between reports from those trained with one versus six different talkers. Correlations between perceived benefit and enjoyment were not significant, and only participant age added unique variance to predicting program enjoyment. CONCLUSIONS: Participants perceived AT to be beneficial. Perceived benefit did not correlate with perceived enjoyment. Compliance with computerized AT programs might be enhanced if patients have regular contact with a hearing professional and train with meaning-based materials. An unheralded benefit of AT may be an increased sense of control over the hearing loss. In future efforts, we might aim to make training more engaging and entertaining, and less tedious.
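The "individualized z scores" mentioned in the analysis standardize each participant's Likert ratings against that participant's own mean and variability, removing idiosyncratic scale use. A minimal sketch of that transformation (illustrative only; the ratings below are hypothetical, not study data):

```python
# Illustrative sketch only: convert one participant's 7-point Likert ratings
# to individualized z scores, i.e., standardize each rating against that
# participant's own mean and (population) standard deviation.

from statistics import mean, pstdev

def individualized_z(ratings):
    """Z-score a participant's ratings relative to their own response set."""
    m = mean(ratings)
    sd = pstdev(ratings)
    if sd == 0:                        # participant gave identical ratings
        return [0.0 for _ in ratings]
    return [(r - m) / sd for r in ratings]

if __name__ == "__main__":
    # Hypothetical ratings across questionnaire items (1-7 scale).
    ratings = [5, 6, 4, 7, 5, 6]
    print([round(z, 2) for z in individualized_z(ratings)])
```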


Subject(s)
Cochlear Implantation/psychology , Hearing Aids/psychology , Hearing Loss/psychology , Patient Acceptance of Health Care/psychology , Psychoacoustics , Adolescent , Adult , Aged , Aged, 80 and over , Pure-Tone Audiometry , Cochlear Implantation/rehabilitation , Female , Hearing Loss/rehabilitation , Humans , Male , Middle Aged , Patient Satisfaction , Self-Assessment (Psychology) , Surveys and Questionnaires , Young Adult