Results 1 - 20 of 48
1.
Ear Hear ; 45(1): 164-173, 2024.
Article in English | MEDLINE | ID: mdl-37491715

ABSTRACT

OBJECTIVES: Speech perception training can be a highly effective intervention to improve perception and language abilities in children who are deaf or hard of hearing. Most studies of speech perception training, however, only measure gains immediately following training; only a minority include a follow-up assessment after a period without training. A critical unanswered question was whether training-related benefits are retained after training has stopped. A primary goal of this investigation was to determine whether children retained training-related benefits 4 to 6 weeks after they completed 16 hours of formal speech perception training, which consisted of auditory training, speechreading training, or a combination of the two. It is also important to determine whether "booster" training can increase the gains made during the initial intensive training period, so another goal of the study was to investigate the benefits of providing home-based booster training during the 4- to 6-week interval after the formal training ceased. The original investigation (Tye-Murray et al. 2022) compared the effects of talker familiarity and the relative benefits of the different types of training. We predicted that the children who received no additional training would retain the gains made during the formal training, and that the children who completed the booster training would realize additional gains. DESIGN: Children with hearing loss, 6 to 12 years old, who had previously participated in the original randomized controlled study returned 4 to 6 weeks after its conclusion to take a follow-up speech perception assessment. The first group (n = 44) received no formal intervention from the research team before the follow-up assessment. A second group of 40 children completed an additional 16 hours of speech perception training at home during the 4- to 6-week interval before the follow-up speech perception assessment.
The home-based speech perception training was a continuation of the same training that was received in the laboratory, reformatted to run on a PC tablet with a portable speaker. The follow-up speech perception assessment included measures of listening and speechreading, with test items spoken by both familiar (trained) and unfamiliar (untrained) talkers. RESULTS: In the group that did not receive the booster training, follow-up testing showed retention of all gains that were obtained immediately following the laboratory-based training. The group that received booster training during the same interval also maintained the benefits from the formal training, with some indication of minor improvement. CONCLUSIONS: Clinically, the present findings are extremely encouraging; the group that did not receive home-based booster training retained the benefits obtained during the laboratory-based training regimen. Moreover, the results suggest that self-paced booster training maintained the relative training gains associated with talker familiarity and training type seen immediately following laboratory-based training. Future aural rehabilitation programs should include maintenance training at home to supplement the speech perception training conducted under more formal conditions at school or in the clinic.


Subject(s)
Correction of Hearing Impairment, Deafness, Hearing Loss, Speech Perception, Child, Humans, Hearing Loss/rehabilitation, Lipreading, Correction of Hearing Impairment/methods
2.
J Neurosci ; 42(3): 435-442, 2022 01 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise).
Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism relying on synchronized brain activity among sensory and motor regions may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.
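
The psychophysiological interaction (PPI) analysis mentioned above tests, for each target region, whether its coupling with a seed region's timecourse changes with the task condition. As a rough illustration only (simulated data, not the authors' pipeline; the regressor construction follows the standard PPI recipe of seed x task interaction):

```python
import numpy as np

rng = np.random.default_rng(1)
n_tr = 200                                  # number of fMRI volumes
seed = rng.normal(size=n_tr)                # seed timecourse (e.g., auditory cortex)
task = np.repeat([0.0, 1.0], n_tr // 2)     # psychological variable: 1 = audiovisual blocks
ppi = seed * task                           # PPI regressor: seed x task interaction

# Simulate a target region whose coupling with the seed strengthens under AV blocks
target = 0.2 * seed + 1.0 * ppi + rng.normal(0.0, 0.5, n_tr)

# Ordinary least squares GLM with intercept, seed, task, and PPI regressors
X = np.column_stack([np.ones(n_tr), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI coefficient: {beta[3]:.2f}")    # positive -> stronger coupling under AV
```

A positive PPI coefficient indicates that seed-target coupling is stronger in the audiovisual condition, mirroring the stronger auditory-visual connectivity the study reports.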


Subject(s)
Auditory Cortex/physiology, Language, Lipreading, Nerve Net/physiology, Speech Perception/physiology, Visual Cortex/physiology, Visual Perception/physiology, Adult, Aged, Aged, 80 and over, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Visual Cortex/diagnostic imaging, Young Adult
3.
Ear Hear ; 43(1): 181-191, 2022.
Article in English | MEDLINE | ID: mdl-34225318

ABSTRACT

OBJECTIVES: Transfer appropriate processing (TAP) refers to the general finding that training gains are maximized when training and testing are conducted under the same conditions. The present study tested the extent to which TAP applies to speech perception training in children with hearing loss. Specifically, we assessed the benefits of computer-based speech perception training games for enhancing children's speech recognition by comparing three training groups: auditory training (AT), audiovisual training (AVT), and a combination of the two (AT/AVT). We also determined whether talker-specific training, as might occur when children train with the speech of next year's classroom teacher, leads to better recognition of that talker's speech and, if so, the extent to which training benefits generalize to untrained talkers. Consistent with TAP theory, we predicted that children would improve their ability to recognize the speech of the trained talker more than that of three untrained talkers and, depending on their training group, would improve more on the auditory-only (listening) or audiovisual (speechreading) speech perception assessment that matched the type of training they received. We also hypothesized that benefit would generalize to untrained talkers and to the test modality in which they did not train, albeit to a lesser extent. DESIGN: Ninety-nine elementary-school-aged children with hearing loss were enrolled in a randomized controlled trial with a repeated-measures A-A-B mixed experimental design, in which children served as their own controls for assessing the overall benefit of a particular training type and three different groups of children yielded data for comparing the three types of training.
We also assessed talker-specific learning and transfer of learning by including speech perception tests with stimuli spoken by the talker with whom a child trained and stimuli spoken by three talkers with whom the child did not train, and by including speech perception tests that presented both auditory (listening) and audiovisual (speechreading) stimuli. Children received 16 hr of gamified training. The games provided word identification and connected speech comprehension training activities. RESULTS: Overall, children showed significant improvement in both their listening and speechreading performance. Consistent with TAP theory, children improved more on their trained talker than on the untrained talkers. Also consistent with TAP theory, the children who received AT improved more on the listening assessment than on the speechreading assessment. However, children who received AVT improved on both types of assessment equally, which is not consistent with our predictions derived from a TAP perspective. Age, language level, and phonological awareness were either not predictive of training benefits or only negligibly so. CONCLUSIONS: The findings support the practice of providing children who have hearing loss with structured speech perception training and suggest that future aural rehabilitation programs might include teacher-specific speech perception training to prepare children for an upcoming school year, especially since training generalizes to other talkers. The results also suggest that the benefits of speech perception training were not significantly related to age, language level, or degree of phonological awareness. The findings are largely consistent with TAP theory, suggesting that the more closely a training task is aligned with the desired outcome, the more likely benefit will accrue.


Subject(s)
Deafness, Hearing Loss, Speech Perception, Child, Computers, Humans, Lipreading, Speech
4.
Ear Hear ; 42(6): 1656-1667, 2021.
Article in English | MEDLINE | ID: mdl-34320527

ABSTRACT

OBJECTIVE: Spoken communication is better when one can see as well as hear the talker. Tye-Murray and colleagues found that even when age-related deficits in audiovisual (AV) speech perception were observed, AV performance could be accurately predicted from auditory-only (A-only) and visual-only (V-only) performance, and that knowing individuals' ages did not increase the accuracy of the prediction. This finding contradicts conventional wisdom, according to which age-related differences in AV speech perception are due to deficits in the integration of auditory and visual information. Our primary goal was to determine whether Tye-Murray et al.'s finding with a closed-set test generalizes to situations more like those in everyday life; a second goal was to test a new predictive model that has important implications for audiological assessment. DESIGN: Participants (N = 109; ages 22-93 years), previously studied by Tye-Murray et al., were administered our new, open-set Lex-List test to assess their auditory, visual, and audiovisual perception of individual words. All testing was conducted in six-talker babble (three male and three female talkers) presented at approximately 62 dB SPL. The audio level of the Lex-List items, when presented, was approximately 59 dB SPL because pilot testing suggested that this signal-to-noise ratio would avoid ceiling performance in the AV condition. RESULTS: Multiple linear regression analyses revealed that A-only and V-only performance accounted for 87.9% of the variance in AV speech perception, and that the contribution of age failed to reach significance. Our new parabolic model accounted for even more (92.8%) of the variance in AV performance, and again the contribution of age was not significant.
Bayesian analyses revealed that for both the linear and parabolic models, the present data were almost 10 times as likely to occur under a reduced model (without age) as under a full model (with age as a predictor). Furthermore, comparison of the two reduced models revealed that the data were more than 100 times as likely to occur under the parabolic model as under the linear regression model. CONCLUSIONS: The present results strongly support Tye-Murray et al.'s hypothesis that AV performance can be accurately predicted from unimodal performance and that knowing individuals' ages does not increase the accuracy of that prediction. Our results represent an important initial step in extending Tye-Murray et al.'s findings to situations more like those encountered in everyday communication. The accuracy with which speech perception was predicted in this study foreshadows a form of precision audiology in which determining individual strengths and weaknesses in unimodal and multimodal speech perception facilitates identification of targets for rehabilitative efforts aimed at recovering and maintaining the speech perception abilities critical to the quality of an older adult's life.


Subject(s)
Audiology, Speech Perception, Adult, Aged, Aged, 80 and over, Bayes Theorem, Female, Hearing, Humans, Male, Middle Aged, Noise, Visual Perception, Young Adult
5.
J Neurosci Res ; 98(9): 1800-1814, 2020 09.
Article in English | MEDLINE | ID: mdl-32415883

ABSTRACT

Deleterious age-related changes in the central auditory nervous system have been referred to as central age-related hearing impairment (ARHI) or central presbycusis. Central ARHI is often assumed to be the consequence of peripheral ARHI. However, it is possible that certain aspects of central ARHI are independent from peripheral ARHI. A confirmation of this possibility could lead to significant improvements in current rehabilitation practices. The major difficulty in addressing this issue arises from confounding factors, such as other age-related changes in both the cochlea and central non-auditory brain structures. Because gap detection is a common measure of central auditory temporal processing, and gap detection thresholds are less influenced by changes in other brain functions such as learning and memory, we investigated the potential relationship between age-related peripheral hearing loss (i.e., audiograms) and age-related changes in gap detection. Consistent with previous studies, a significant difference was found for gap detection thresholds between young and older adults. However, among older adults, no significant associations were observed between gap detection ability and several other independent variables including the pure tone audiogram average, the Wechsler Adult Intelligence Scale-Vocabulary score, gender, and age. Statistical analyses showed little or no contributions from these independent variables to gap detection thresholds. Thus, our data indicate that age-related decline in central temporal processing is largely independent of peripheral ARHI.
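
The pure-tone audiogram average used as an independent variable above is simply a mean of hearing thresholds at selected frequencies. The abstract does not state which frequencies entered the average, so this sketch (with a hypothetical audiogram) assumes the common 3- and 4-frequency conventions:

```python
# Hypothetical audiogram: hearing threshold (dB HL) at each test frequency (Hz)
audiogram = {250: 15, 500: 20, 1000: 25, 2000: 35, 4000: 50, 8000: 60}

def pure_tone_average(audiogram, freqs=(500, 1000, 2000)):
    """Mean threshold (dB HL) across the chosen test frequencies."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

pta3 = pure_tone_average(audiogram)                           # 3-frequency PTA
pta4 = pure_tone_average(audiogram, (500, 1000, 2000, 4000))  # includes 4 kHz
print(pta3, pta4)
```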


Subject(s)
Auditory Perception/physiology, Hearing Loss, Central/physiopathology, Presbycusis/physiopathology, Adult, Age Factors, Aged, Aged, 80 and over, Aging/physiology, Auditory Threshold, Cochlea/physiopathology, Female, Humans, Male, Middle Aged, Young Adult
6.
Ear Hear ; 41(3): 549-560, 2020.
Article in English | MEDLINE | ID: mdl-31453875

ABSTRACT

OBJECTIVES: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception at speaking rates both slower and faster than normal. DESIGN: Participants (N = 145), ranging in age from 22 to 92 years, were tested in auditory-only, visual-only, and auditory-visual conditions using a closed-set speech perception test. Five speaking rates were presented in each modality: an unmodified (normal) rate, two rates slower than normal, and two rates faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. RESULTS: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance at all five speaking rates. CONCLUSIONS: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech in auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates.
The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and the ability of older adults to use visual speech information to compensate for age-related hearing loss.


Subject(s)
Speech Perception, Acoustic Stimulation, Aged, Auditory Perception, Humans, Speech, Visual Perception
7.
J Child Lang ; 44(1): 185-215, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26752548

ABSTRACT

Adults use vision to perceive low-fidelity speech, yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not a lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulating the consonant/rhyme b/ag; hear the non-intact onset/rhyme -b/ag) versus auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming in children from four to fourteen years of age, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. These findings are critical for incorporating visual speech into developmental theories of speech perception.


Subject(s)
Lipreading, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Auditory Perception/physiology, Child, Child, Preschool, Female, Humans, Male, Speech
8.
Ear Hear ; 37(6): 623-633, 2016.
Article in English | MEDLINE | ID: mdl-27438867

ABSTRACT

OBJECTIVES: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH) and (2) how the degree of HI, auditory word recognition, and age influenced results in CHI. Note that the AV stimuli were not the traditional bimodal input; instead, they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and (2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore" or "fill in" the nonintact auditory speech, in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech. DESIGN: Participants were 62 CHI and 62 CNH whose ages had a group mean and distribution akin to those of the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture word task. RESULTS: Both CHI and CNH showed greater phonological priming from high- than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI versus the CNH; thus, these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge.
Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than from the words, a pattern consistent with the prediction that children are more aware of phonetic-phonological content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that the CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the AV nonwords. CONCLUSIONS: With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI.


Subject(s)
Acoustic Stimulation, Hearing Loss, Sensorineural/physiopathology, Photic Stimulation, Implicit Memory, Vocabulary, Adolescent, Case-Control Studies, Child, Child, Preschool, Female, Humans, Male, Phonetics
9.
J Exp Child Psychol ; 126: 295-312, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24974346

ABSTRACT

We investigated whether visual speech fills in non-intact auditory speech (excised consonant onsets) in typically developing children from 4 to 14 years of age. Stimuli with the excised auditory onsets were presented in the audiovisual (AV) and auditory-only (AO) modes. A visual speech fill-in effect occurs when listeners experience hearing the same non-intact auditory stimulus (e.g., /-b/ag) as different depending on the presence/absence of visual speech such as hearing /bag/ in the AV mode but hearing /ag/ in the AO mode. We quantified the visual speech fill-in effect by the difference in the number of correct consonant onset responses between the modes. We found that easy visual speech cues /b/ provided greater filling in than difficult cues /g/. Only older children benefited from difficult visual speech cues, whereas all children benefited from easy visual speech cues, although 4- and 5-year-olds did not benefit as much as older children. To explore task demands, we compared results on our new task with those on the McGurk task. The influence of visual speech was uniquely associated with age and vocabulary abilities for the visual speech fill--in effect but was uniquely associated with speechreading skills for the McGurk effect. This dissociation implies that visual speech--as processed by children-is a complicated and multifaceted phenomenon underpinned by heterogeneous abilities. These results emphasize that children perceive a speaker's utterance rather than the auditory stimulus per se. In children, as in adults, there is more to speech perception than meets the ear.
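
The fill-in effect described above reduces to a simple difference score between presentation modes. A minimal sketch with hypothetical trial data (the response coding and trial counts here are invented for illustration):

```python
# Hypothetical trial outcomes for one child: 1 = consonant onset reported correctly
av_correct = [1, 1, 0, 1, 1, 1, 0, 1]   # audiovisual (AV) mode
ao_correct = [0, 1, 0, 0, 1, 0, 0, 1]   # auditory-only (AO) mode

def fill_in_effect(av, ao):
    """Proportion correct in AV minus proportion correct in AO.
    A positive value means visual speech filled in the excised onsets."""
    return sum(av) / len(av) - sum(ao) / len(ao)

print(fill_in_effect(av_correct, ao_correct))  # 0.75 - 0.375 = 0.375
```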


Subject(s)
Lipreading, Speech Perception, Speech, Acoustic Stimulation, Adolescent, Age Factors, Auditory Perception, Child, Child, Preschool, Cues, Female, Humans, Male, Phonetics, Visual Perception
10.
Ear Hear ; 34(6): 753-62, 2013.
Article in English | MEDLINE | ID: mdl-23782714

ABSTRACT

OBJECTIVES: This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI). DESIGN: Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors' new multimodal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., picture-distractor pairs of dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture-distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead produce either no effect or semantic facilitation (faster picture-naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of the semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children.
Our multimodal picture word task allowed us to (1) quantify picture naming results in the presence of auditory speech distractors and (2) probe whether the addition of visual speech enriched the fidelity of the auditory input sufficiently to influence results. RESULTS: In the HI group, the auditory distractors produced no effect or a facilitative effect, in agreement with proposals of the CT hypothesis. In contrast, the audiovisual distractors produced the normal semantic interference effect. Results in the HI versus NH groups differed significantly for the auditory mode, but not for the audiovisual mode. CONCLUSIONS: This research indicates that the lower fidelity auditory speech associated with HI affects the normalcy of semantic access by children. Further, adding visual speech enriches the lower fidelity auditory input sufficiently to produce the semantic interference effect typical of children with NH.
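
The interference and facilitation effects discussed above come down to a signed difference in mean naming times between distractor conditions. A minimal sketch with hypothetical data (the naming times are invented for illustration):

```python
# Hypothetical picture-naming times (ms)
related_ms   = [820, 845, 860, 810, 835]   # semantically related distractors
unrelated_ms = [790, 800, 815, 785, 805]   # unrelated distractors

def semantic_effect(related, unrelated):
    """Mean related minus mean unrelated naming time (ms).
    Positive = semantic interference; negative = semantic facilitation."""
    return sum(related) / len(related) - sum(unrelated) / len(unrelated)

print(semantic_effect(related_ms, unrelated_ms))  # 834.0 - 799.0 = 35.0 (interference)
```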


Subject(s)
Attention/physiology, Hearing Loss, Sensorineural/physiopathology, Learning/physiology, Semantics, Speech/physiology, Analysis of Variance, Case-Control Studies, Child, Child, Preschool, Female, Hearing Loss, Sensorineural/psychology, Humans, Language Tests, Male
11.
J Am Acad Audiol ; 23(8): 623-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22967737

ABSTRACT

BACKGROUND: Patients seeking treatment for hearing-related communication difficulties are often disappointed with the eventual outcomes, even after they receive a hearing aid or a cochlear implant. One approach that audiologists have used to improve communication outcomes is to provide auditory training (AT), but compliance rates for completing AT programs are notoriously low. PURPOSE: The primary purpose of the investigation was to conduct a patient-based evaluation of the benefits of an AT program, I Hear What You Mean, in order to determine how the AT experience might be improved. A secondary purpose was to examine whether patient perceptions of the AT experience varied depending on whether they were trained with a single talker's voice or heard training materials from multiple talkers. RESEARCH DESIGN: Participants completed a 6 wk auditory training program and were asked to respond to a posttraining questionnaire. Half of the participants heard the training materials spoken by six different talkers, and half heard the materials produced by only one of the six talkers. STUDY SAMPLE: Participants included 78 adult hearing-aid users and 15 cochlear-implant users for a total of 93 participants who completed the study, ages 18 to 89 yr (M = 66 yr, SD = 16.67 yr). Forty-three females and 50 males participated. The mean better ear pure-tone average for the participants was 56 dB HL (SD = 25 dB). INTERVENTION: Participants completed the single- or multiple-talker version of the 6 wk computerized AT program, I Hear What You Mean, followed by completion of a posttraining questionnaire in order to rate the benefits of overall training and the training activities and to describe what they liked best and what they liked least. DATA COLLECTION AND ANALYSIS: After completing a 6 wk computerized AT program, participants completed a posttraining questionnaire. 
Seven-point Likert-scale responses to whether understanding spoken language had improved were converted to individualized z scores and analyzed for changes due to AT. Written responses were coded and categorized to consider both positive and negative subjective opinions of the AT program. Regression analyses were conducted to examine the relationship between perceived effort and perceived benefit and to identify factors that predict overall program enjoyment. RESULTS: Participants reported improvements in their ability to recognize spoken language and in their self-confidence as a result of participating in AT. Few differences were observed between reports from those trained with one versus six different talkers. Correlations between perceived benefit and enjoyment were not significant, and only participant age added unique variance to predicting program enjoyment. CONCLUSIONS: Participants perceived AT to be beneficial, but perceived benefit did not correlate with perceived enjoyment. Compliance with computerized AT programs might be enhanced if patients have regular contact with a hearing professional and train with meaning-based materials. An unheralded benefit of AT may be an increased sense of control over the hearing loss. In future efforts, we might aim to make training more engaging and entertaining, and less tedious.
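
The conversion of Likert responses to individualized z scores mentioned above standardizes each rating against the respondent's own mean and spread, removing individual differences in how people use the scale. A minimal sketch (the ratings are hypothetical):

```python
import statistics

# Hypothetical 7-point Likert ratings from one participant across questionnaire items
ratings = [5, 6, 4, 7, 6, 5, 3]

def individualized_z(ratings):
    """Standardize ratings against this participant's own mean and SD."""
    mu = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    return [(r - mu) / sd for r in ratings]

z = individualized_z(ratings)
print([round(v, 2) for v in z])   # mean ~0, SD ~1 within the participant
```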


Subject(s)
Cochlear Implantation/psychology, Hearing Aids/psychology, Hearing Loss/psychology, Patient Acceptance of Health Care/psychology, Psychoacoustics, Adolescent, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Cochlear Implantation/rehabilitation, Female, Hearing Loss/rehabilitation, Humans, Male, Middle Aged, Patient Satisfaction, Self-Assessment, Surveys and Questionnaires, Young Adult
12.
Am J Audiol ; 31(3S): 905-913, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36037482

ABSTRACT

PURPOSE: A digital therapeutic is a software-based intervention for a disease and/or disorder and often includes a daily, interactive curriculum and exercises; online support from a professional versed in the treatment base; and an online support community, typically active as a social chat group. Recently, the Consumer Technology Association published revised standards for digital therapeutics (DTx) stipulating that a DTx must be evidence-based, founded in scientific evidence showing effectiveness, and supported by evidence showing improved patient satisfaction and adherence to an intervention. The purpose of this study was to investigate whether a DTx could help older adults better adjust to their hearing loss and acclimate to new hearing aids. METHOD: Thirty older adults with mild or moderate hearing loss who had never used hearing aids participated. All hearing aids were fitted remotely. Participants used a hearing health care DTx (Amptify) for 4 weeks, either immediately following receipt of the hearing aids or 4 weeks after the fitting. A control condition was watching closed-caption television. Participants completed a satisfaction questionnaire that queried their impressions of the DTx, with items that included both a 1-7 rating scale and open-ended questions. RESULTS: Ninety-six percent of the participants reported positive benefits, and one-half reported that the DTx helped them adjust to their new hearing aids. They assigned a score of 5.8 to a questionnaire item similar to a Net Promoter Score. Benefits included an enhanced ability to engage in conversation and increased listening confidence. CONCLUSION: This investigation provides scientific evidence to support the use of a hearing health care DTx, paving the way for audiologists to more easily and efficiently incorporate follow-up aural rehabilitation into their routine clinical services and to provide services remotely.


Subject(s)
Hearing Aids, Hearing Loss, Aged, Hearing, Hearing Loss/rehabilitation, Humans, Midazolam, Patient Satisfaction
13.
Ear Hear ; 32(6): 775-81, 2011.
Article in English | MEDLINE | ID: mdl-21716112

ABSTRACT

OBJECTIVES: Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. DESIGN: This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. RESULTS: Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). 
Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but that age accounted for significant variance even after controlling for auditory sensitivity. CONCLUSIONS: Results suggest that age-related reductions in auditory sensitivity account for a sizable portion of the individual variance in listening comprehension observed across the adult lifespan. Other potential contributors, including a possible role for age-related declines in perceptual and cognitive abilities, are discussed. Clinically, the results suggest that amplification is likely to improve listening comprehension but that increased audibility alone may not be sufficient to maintain listening comprehension beyond age 65 to 70 yr. Additional research will be needed to identify potential target abilities for training or other rehabilitation procedures that could supplement sensory aids to provide additional improvements in listening comprehension.
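The "unique variance after controlling for auditory sensitivity" logic above is a hierarchical regression: fit comprehension on pure-tone average first, then ask how much R² rises when age is added. A minimal sketch using the standard two-predictor R² identity from pairwise correlations; the data below are hypothetical, not the study's.

```python
from statistics import mean

def r2_increment(y, x1, x2):
    """R-squared gained by adding predictor x2 to a model that already
    contains x1, computed from pairwise correlations. Mirrors the
    hierarchical 'unique variance' logic, not the paper's actual code."""
    def corr(a, b):
        ma, mb = mean(a), mean(b)
        num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        den = (sum((ai - ma) ** 2 for ai in a)
               * sum((bi - mb) ** 2 for bi in b)) ** 0.5
        return num / den
    r_y1, r_y2, r_12 = corr(y, x1), corr(y, x2), corr(x1, x2)
    # Standard two-predictor R^2 from pairwise correlations:
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r2_full - r_y1**2

# Hypothetical listeners: comprehension score, low-frequency PTA (dB HL), age (yr)
comp = [95, 94, 93, 91, 88, 80, 70]
pta = [10, 12, 15, 20, 28, 40, 55]
age = [20, 30, 40, 50, 60, 70, 80]
unique_age = r2_increment(comp, pta, age)  # variance age adds beyond PTA
```

Because in-sample R² cannot decrease when a predictor is added, the increment is non-negative; a nonzero increment is what "age accounted for significant variance even after controlling for auditory sensitivity" quantifies (subject to a significance test).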


Subject(s)
Aging/physiology, Narration, Phonetics, Presbycusis/diagnosis, Presbycusis/physiopathology, Speech Perception/physiology, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Auditory Threshold/physiology, Female, Hearing Tests/methods, Humans, Male, Middle Aged, Young Adult
14.
Ear Hear ; 32(5): 650-5, 2011.
Article in English | MEDLINE | ID: mdl-21478751

ABSTRACT

OBJECTIVE: The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. DESIGN: Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. RESULTS: Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. CONCLUSIONS: Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
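The abstract says signal level was "adaptively varied to establish the SNR at threshold" but does not specify the tracking rule. A standard choice for detection thresholds is a 2-down/1-up staircase, which converges near the 70.7%-correct point; the sketch below simulates one with a hypothetical listener and is illustrative only, not the study's procedure.

```python
import math
import random

def staircase_threshold(p_correct_at, start_snr=0.0, step=2.0, reversals_needed=8):
    """Minimal 2-down/1-up staircase: lower SNR after two consecutive
    correct detections, raise it after a miss, and average the SNR at
    the final reversals to estimate threshold."""
    snr, correct_run, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < reversals_needed:
        if random.random() < p_correct_at(snr):
            correct_run += 1
            if correct_run == 2:          # two correct -> make it harder
                correct_run = 0
                if direction == +1:       # turning point going up -> reversal
                    reversals.append(snr)
                direction = -1
                snr -= step
        else:                             # miss -> make it easier
            correct_run = 0
            if direction == -1:           # turning point going down -> reversal
                reversals.append(snr)
            direction = +1
            snr += step
    last = reversals[-6:]
    return sum(last) / len(last)

random.seed(1)

# Hypothetical listener whose detection probability rises smoothly around -8 dB SNR:
def listener(snr):
    return 1.0 / (1.0 + math.exp(-(snr + 8.0)))

threshold = staircase_threshold(listener)
```

Comparing the audiovisual threshold from such a track against the auditory-only threshold gives the cross-modal enhancement in dB.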


Subject(s)
Aging/physiology, Speech Discrimination Tests, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Aged, 80 and over, Audiometry, Pure-Tone, Auditory Threshold/physiology, Contrast Sensitivity/physiology, Humans, Phonetics, Photic Stimulation/methods, Signal-To-Noise Ratio, Young Adult
15.
Int J Audiol ; 50(11): 802-8, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21929377

ABSTRACT

OBJECTIVE: Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. DESIGN: The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. STUDY SAMPLE: Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. RESULTS: Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. CONCLUSION: Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.
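The transfer-appropriate pattern reported above is a crossover interaction: each group improves more on the test version that matches its training. A small sketch of how that crossover can be scored; the gain values are hypothetical, not the study's.

```python
def transfer_interaction(gains):
    """Crossover (interaction) score for transfer-appropriate training.
    `gains[group][test]` holds mean pre-to-post gains; a positive score
    means matched training/test pairings gained more than mismatched ones."""
    matched = gains["single"]["single_test"] + gains["multi"]["multi_test"]
    mismatched = gains["single"]["multi_test"] + gains["multi"]["single_test"]
    return matched - mismatched

# Hypothetical mean gains (percentage points) on each test version:
gains = {
    "single": {"single_test": 12.0, "multi_test": 7.0},
    "multi": {"single_test": 8.0, "multi_test": 13.0},
}
crossover = transfer_interaction(gains)  # positive => transfer-appropriate
```

In the actual design this contrast would be tested as the training-condition × test-version interaction in the 2 × 2 × 2 mixed ANOVA.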


Subject(s)
Correction of Hearing Impairment/methods, Discrimination, Psychological, Hearing Loss/rehabilitation, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/rehabilitation, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Aged, Aged, 80 and over, Correction of Hearing Impairment/instrumentation, Female, Hearing Aids, Hearing Loss/physiopathology, Hearing Loss/psychology, Humans, Male, Middle Aged, Missouri, Persons With Hearing Impairments/psychology, Recognition, Psychology, Speech Discrimination Tests, Time Factors, Treatment Outcome, Young Adult
16.
Lang Speech Hear Serv Sch ; 52(4): 1049-1060, 2021 10 18.
Article in English | MEDLINE | ID: mdl-34403290

ABSTRACT

Purpose A meaning-oriented auditory training program for children who are deaf or hard of hearing (d/hh) was assessed with regard to its efficacy in promoting novel word learning. Method While administering the auditory training program, one of the authors (Elizabeth Mauzé) observed that children were learning words they previously did not know. Therefore, we systematically assessed vocabulary gains among 16 children. Most completed pretest, posttest, and retention versions of a picture-naming task in which they attempted to verbally identify 199 color pictures of words that would appear during training. Posttest and retention versions included both pictures used and not used during training in order to test generalization of associations between words and their referents. Importantly, each training session involved meaning-oriented, albeit simple, activities/games on a computer. Results At posttest, the percentage of word gain was 27.3% (SD = 12.5; confidence interval [CI] of the mean: 24.2-30.4) using trained pictures as cues and 25.9% (CI of the mean: 22.9-29.0) using untrained pictures as cues. An analysis of retention scores (for the 13 participants who completed the retention task weeks later) indicated strikingly high levels of retention for the words that had been learned. Conclusions These findings favor meaning-oriented auditory training when it comes to the acquisition of different linguistic subsystems, lexis in this case. We also expand the discussion to include other evidence-based recommendations regarding how vocabulary is presented (input-based effects) and what learners are asked to do (task-based effects) as part of an overall effort to help children who are d/hh increase their vocabulary knowledge.
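The "CI of the mean" figures reported above are straightforward to compute with a normal approximation (mean ± 1.96 × SEM). A sketch with hypothetical per-child gain percentages, not the study's raw data:

```python
from statistics import mean, stdev

def mean_with_ci95(values):
    """Mean with a normal-approximation 95% confidence interval,
    mean +/- 1.96 * (SD / sqrt(n))."""
    m = mean(values)
    sem = stdev(values) / len(values) ** 0.5
    return m, (m - 1.96 * sem, m + 1.96 * sem)

# Hypothetical per-child word-gain percentages for 16 children:
word_gains = [27, 31, 22, 35, 24, 29, 18, 33, 26, 30, 21, 28, 25, 34, 23, 31]
m, (lo, hi) = mean_with_ci95(word_gains)
```

With only 13-16 children, a t-based multiplier (about 2.1 at these sample sizes) would give a slightly wider interval than the 1.96 used here.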


Subject(s)
Hearing Loss, Vocabulary, Child, Hearing, Hearing Loss/therapy, Humans, Linguistics, Verbal Learning
17.
Ear Hear ; 31(5): 636-44, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20473178

ABSTRACT

OBJECTIVE: The purpose of this investigation was to compare the ability of young and older adults to integrate auditory and visual sentence materials under conditions of good and poor signal clarity. The principle of inverse effectiveness (PoIE), which characterizes many neuronal and behavioral phenomena related to multisensory integration, asserts that as unimodal performance declines, integration is enhanced. Thus, the PoIE predicts that both young and older adults will show enhanced integration of auditory and visual speech stimuli when these stimuli are degraded. More importantly, because older adults' unimodal speech recognition skills decline in both the auditory and visual domains, the PoIE predicts that older adults will show enhanced integration during audiovisual speech recognition relative to younger adults. This study provides a test of these predictions. DESIGN: Fifty-three young and 53 older adults with normal hearing completed the closed-set Build-A-Sentence test and the CUNY Sentence test in a total of eight conditions; four unimodal and four audiovisual. In the unimodal conditions, stimuli were either auditory or visual and either easier or harder to perceive; the audiovisual conditions were formed from all the combinations of the unimodal signals. The hard visual signals were created by degrading video contrast, and the hard auditory signals were created by decreasing the signal to noise ratio. Scores from the unimodal and bimodal conditions were used to compute auditory enhancement and integration enhancement measures. RESULTS: Contrary to the PoIE, neither the auditory enhancement nor integration enhancement measures increased when signal clarity in the auditory or visual channel of audiovisual speech stimuli was decreased, nor was either measure higher for older adults than for young adults. In audiovisual conditions with easy visual stimuli, the integration enhancement measure for older adults was equivalent to that for young adults. 
However, in conditions with hard visual stimuli, integration enhancement for older adults was significantly lower than that for young adults. CONCLUSIONS: The present findings do not support extension of the PoIE to audiovisual speech recognition. Our results are not consistent with either the prediction that integration would be enhanced under conditions of poor signal clarity or the prediction that older adults would show enhanced integration, relative to young adults. Although there is considerable controversy with regard to the best way to measure audiovisual integration, the fact that two of the most prominent measures, auditory enhancement and integration enhancement, both yielded results inconsistent with the PoIE strongly suggests that the integration of audiovisual speech stimuli differs in some fundamental way from the integration of other bimodal stimuli. The results also suggest that aging does not impair integration enhancement when the visual speech signal has good clarity, but may affect it when the visual speech signal has poor clarity.
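Auditory enhancement is commonly scored as the audiovisual gain over vision alone, scaled by the room left for improvement above the visual-only score; whether this exact formulation matches the paper's measure is an assumption here, and the sketch is illustrative only.

```python
def auditory_enhancement(av_score, v_score):
    """One common formulation of auditory enhancement for percent-correct
    scores: (AV - V) / (100 - V), the fraction of the available headroom
    above visual-only performance that adding audition recovers."""
    if v_score >= 100:
        return 0.0  # no headroom left to improve
    return (av_score - v_score) / (100.0 - v_score)

# Hypothetical listener: 30% correct by speechreading alone, 79% audiovisually:
ae = auditory_enhancement(79.0, 30.0)
```

Scaling by headroom is what lets the measure compare listeners whose unimodal scores differ, which is exactly the comparison the PoIE predictions turn on.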


Subject(s)
Aging/physiology, Presbycusis/physiopathology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Aged, 80 and over, Humans, Lipreading, Photic Stimulation, Psychoacoustics, Young Adult
18.
Ear Hear ; 30(4): 475-84, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19494776

ABSTRACT

OBJECTIVES: The goals of this investigation were to gauge how hearing loss affects the self-perceived job performance and psycho-emotional status of professionals in the workforce and to develop a profile of their aural rehabilitation needs. DESIGN: Forty-eight participants who had at least a high school education and who held salaried positions participated in one of seven focus groups. Participants first answered questions about a hypothetical executive who had hearing loss and considered how she might react to various communication issues. They then addressed questions about their own work-related predicaments. The sessions were audio-video recorded and later transcribed for analysis. RESULTS: Unlike workers who have occupational hearing loss, the professionals in this investigation seem not to experience an inordinate degree of stigmatization in their workplaces, although most believe that hearing loss has negatively affected their job performance. Some of the participants believe that they have lost their "competitive edge," and some believe that they have been denied promotions because of hearing loss. However, most report that they have overcome their hearing-related difficulties by various means, and many have developed a determination and stamina to remain active in the workforce. The majority of the participants seemed to be unfamiliar with the Americans with Disabilities Act, Public Law 101-336. The overriding theme to emerge is that professionals desire to maintain their competency to perform their jobs and will do what they have to do to "get the job done." CONCLUSIONS: The situations of professionals who have hearing loss can be modeled, with a central theme of maintaining job competency or a competitive edge.
It is hypothesized that five factors affect professionals' abilities to continue their optimal work performance in the face of hearing loss: (a) self-concept and sense of internal locus of control, (b) use of hearing assistive technology, (c) supervisor's and co-workers' perceptions and the provision of accommodations in the workplace, (d) use of effective coping strategies, and (e) communication difficulties and problem situations. The implications that the present findings hold for aural rehabilitation intervention plans are considered, and a problem-solving approach is reviewed.


Subject(s)
Adaptation, Psychological, Emotions, Employment, Hearing Loss/psychology, Hearing Loss/rehabilitation, Adult, Aged, Communication Barriers, Educational Status, Female, Focus Groups, Humans, Interprofessional Relations, Male, Middle Aged, Problem Solving
19.
J Exp Child Psychol ; 102(1): 40-59, 2009 Jan.
Article in English | MEDLINE | ID: mdl-18829049

ABSTRACT

This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.


Subject(s)
Lipreading, Pattern Recognition, Visual, Phonetics, Semantics, Speech Perception, Verbal Behavior, Adolescent, Attention, Child, Child, Preschool, Female, Humans, Male, Reaction Time
20.
J Speech Lang Hear Res ; 52(2): 412-34, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19339701

ABSTRACT

PURPOSE: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). METHOD: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in place of articulation, or conflicting in voicing; for example, the picture "pizza" coupled with the distractors "peach," "teacher," or "beast," respectively. Speed of picture naming was measured. RESULTS: The conflicting conditions slowed naming, and phonological processing by children with HL displayed the age-related shift in sensitivity to visual speech seen in children with NH, although with developmental delay. Younger children with HL exhibited a disproportionately large influence of visual speech and a negligible influence of auditory speech, whereas older children with HL showed a robust influence of auditory speech with no benefit to performance from adding visual speech. The congruent conditions did not speed naming in children with HL, nor did the addition of visual speech influence performance. Unexpectedly, the /ʌ/-vowel congruent distractors slowed naming in children with HL and decreased articulatory proficiency. CONCLUSIONS: Results for the conflicting conditions are consistent with the hypothesis that speech representations in children with HL (a) are initially disproportionately structured in terms of visual speech and (b) become better specified with age in terms of auditorily encoded information.


Subject(s)
Hearing Loss/psychology, Psycholinguistics, Speech Perception, Acoustic Stimulation, Aging, Analysis of Variance, Child, Child, Preschool, Female, Humans, Language Tests, Lipreading, Male, Perceptual Masking, Photic Stimulation, Regression Analysis