Results 1 - 5 of 5
1.
Otol Neurotol ; 39(8): 950-956, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30001284

ABSTRACT

HYPOTHESIS: Persons with normal audiometric thresholds but excessive difficulty hearing in background noise will choose auditory training as a treatment option. BACKGROUND: Auditory training has traditionally been reserved for those with marked hearing loss. We investigated auditory training as a treatment option for those who have normal auditory thresholds but complain about hearing in noise, a population of patients for whom no therapy or intervention currently exists. We also determined the willingness of this patient population to volunteer for a free auditory training program. METHODS: We administered a 14-item, telephone-based questionnaire to assess perceived difficulty hearing in noise and willingness to volunteer for auditory training. We developed questions to identify those who consistently reported difficulty hearing in noise, but not in quiet. RESULTS: The 11,938-person database included 2,299 patients with pure-tone averages less than 25 dB HL. A total of 474 of these patients completed our questionnaire, 135 of whom had normal audiometric thresholds at all octave frequencies from 0.25 to 8 kHz. We found that difficulty hearing in noise was a graded problem. Our screening for consistent reports showed that people who consistently had difficulty hearing in noise, but not in quiet, were the most likely to try auditory training. CONCLUSIONS: Although relatively few patients in the database had both normal hearing thresholds and complaints of severe difficulty hearing in noise, these patients were generally willing to volunteer for auditory training. Our results provide evidence that many in this underserved population would volunteer for auditory training.
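The two-stage screening described above (pure-tone average below the cutoff, then normal thresholds at every octave frequency from 0.25 to 8 kHz) can be sketched as a simple filter. Field names and the 25 dB HL "normal" cutoff here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the screening criteria; record layout is assumed.
OCTAVES_KHZ = [0.25, 0.5, 1, 2, 4, 8]

def passes_screening(record, cutoff_db=25):
    """True if the pure-tone average and every octave threshold fall below cutoff."""
    if record["pta_db"] >= cutoff_db:
        return False
    return all(record["thresholds_db"][f] < cutoff_db for f in OCTAVES_KHZ)

patients = [
    {"pta_db": 12, "thresholds_db": {f: 10 for f in OCTAVES_KHZ}},  # normal
    {"pta_db": 30, "thresholds_db": {f: 35 for f in OCTAVES_KHZ}},  # elevated
]
eligible = [p for p in patients if passes_screening(p)]
```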


Subject(s)
Auditory Threshold/physiology , Correction of Hearing Impairment , Hearing/physiology , Noise , Patient Acceptance of Health Care , Audiometry, Pure-Tone/methods , Databases, Factual , Humans , Surveys and Questionnaires
2.
Semin Hear ; 36(4): 263-72, 2015 Nov.
Article in English | MEDLINE | ID: mdl-27587913

ABSTRACT

There has been considerable interest in measuring the perceptual effort required to understand speech and in identifying factors that might reduce such effort. In the current study, we investigated whether, in addition to improving speech intelligibility, auditory training also could reduce perceptual or listening effort. Perceptual effort was assessed using a modified version of the n-back memory task in which participants heard lists of words presented without background noise and were asked to continually update their memory of the three most recently presented words. Perceptual effort was indexed by memory for items in the three-back position immediately before, immediately after, and 3 months after participants completed the Computerized Learning Exercises for Aural Rehabilitation (clEAR), a 12-session computerized auditory training program. Immediate posttraining measures of perceptual effort indicated that participants could remember approximately one additional word compared with pretraining. Moreover, some training gains were retained at the 3-month follow-up, as indicated by significantly greater recall for the three-back item at the 3-month measurement than at pretest. There was a small but significant correlation between gains in intelligibility and gains in perceptual effort. The findings are discussed within the framework of a limited-capacity speech perception system.
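Scoring the n-back probe described above can be sketched as follows: each item from position n onward is a probe whose target is the word presented n items earlier. This is only an illustration of the general n-back logic; the study's exact list lengths, probe timing, and scoring rules are not reproduced here.

```python
# Minimal sketch of n-back scoring; task parameters are assumptions.
def score_n_back(presented, responses, n=3):
    """Proportion of probes where the response matches the word n items back.

    presented: list of words in presentation order.
    responses: dict mapping probe position -> recalled word.
    """
    probes = [(i, presented[i - n]) for i in range(n, len(presented))]
    if not probes:
        return 0.0
    correct = sum(1 for i, target in probes if responses.get(i) == target)
    return correct / len(probes)

# At position 3 the 3-back target is "cat"; at position 4 it is "dog".
acc = score_n_back(["cat", "dog", "sun", "map", "tree"], {3: "cat", 4: "dog"})
```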

3.
Psychon Bull Rev ; 22(4): 1048-53, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25421408

ABSTRACT

Individuals lip read themselves more accurately than they lip read others when only the visual speech signal is available (Tye-Murray et al., Psychonomic Bulletin & Review, 20, 115-119, 2013). This self-advantage for vision-only speech recognition is consistent with the common-coding hypothesis (Prinz, European Journal of Cognitive Psychology, 9, 129-154, 1997), which posits (1) that observing an action activates the same motor plan representation as actually performing that action and (2) that observing one's own actions activates motor plan representations more than observing others' actions, because of greater congruity between percepts and corresponding motor plans. The present study extends this line of research to audiovisual speech recognition by examining whether there is a self-advantage when the visual signal is added to the auditory signal under poor listening conditions. Participants were assigned to subgroups for round-robin testing in which each participant was paired with every member of their subgroup, including themselves, serving as both talker and listener/observer. On average, the benefit participants obtained from the visual signal when they were the talker was greater than when the talker was someone else and also was greater than the benefit others obtained from observing as well as listening to them. Moreover, the self-advantage in audiovisual speech recognition was significant after statistically controlling for individual differences in both participants' ability to benefit from a visual speech signal and the extent to which their own visual speech signal benefited others. These findings are consistent with our previous finding of a self-advantage in lip reading and with the hypothesis of a common code for action perception and motor plan representation.
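The round-robin design above, in which every subgroup member is paired with every member including themselves as talker and listener/observer, amounts to enumerating all ordered pairs. A minimal sketch, with illustrative participant labels:

```python
import itertools

def round_robin_pairs(subgroup):
    """All ordered (talker, listener) pairs within a subgroup, self-pairings included."""
    return list(itertools.product(subgroup, repeat=2))

# A subgroup of 3 yields 9 ordered pairs, 3 of which are self-pairings.
pairs = round_robin_pairs(["A", "B", "C"])
self_pairs = [p for p in pairs if p[0] == p[1]]
```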


Subject(s)
Lipreading , Noise , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Auditory Perception , Female , Humans , Male , Motion Perception/physiology , Speech/physiology , Young Adult
4.
Psychon Bull Rev ; 20(1): 115-9, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23132604

ABSTRACT

Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.


Subject(s)
Lipreading , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Speech Perception/physiology , Adolescent , Adult , Humans , Self Concept , Young Adult
5.
J Acoust Soc Am ; 123(5): 2858-66, 2008 May.
Article in English | MEDLINE | ID: mdl-18529201

ABSTRACT

The ability to integrate information across sensory channels is critical for both within- and between-modality speech processing. The present study evaluated the hypothesis that inter- and intramodal integration abilities are related in young and older adults. Further, the investigation asked whether intramodal integration (auditory + auditory) and intermodal integration (auditory + visual) resist changes as a function of either aging or the presence of hearing loss. Three groups of adults (young with normal hearing, older with normal hearing, and older with hearing loss) were asked to identify words in sentence context. Intramodal integration ability was assessed by presenting disjoint passbands of speech (550-750 and 1650-2250 Hz), one to each ear. Integration was indexed by factoring monotic from dichotic scores to control for potential hearing- or age-related influences on absolute performance. Intermodal integration ability was assessed by presenting the auditory and visual signals together. Integration was indexed by a measure based on probabilistic models of auditory-visual integration, termed integration enhancement. Results suggested that both types of integration ability are largely resistant to changes with age and hearing loss. In addition, intra- and intermodal integration were not correlated. These findings suggest that, as measured here, there is not a common mechanism that accounts for both inter- and intramodal integration performance.
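One common probabilistic benchmark for audiovisual performance is probability summation, where the predicted audiovisual proportion correct is p_A + p_V - p_A * p_V, and observed performance is compared against that prediction. The sketch below uses this benchmark purely as an illustration; the study's actual "integration enhancement" formula is not reproduced here.

```python
# Illustrative comparison of observed AV scores against a
# probability-summation prediction; this is an assumed benchmark,
# not the paper's exact measure.

def predicted_av(p_a, p_v):
    """Probability-summation prediction from unimodal proportions correct."""
    return p_a + p_v - p_a * p_v

def enhancement(p_av_observed, p_a, p_v):
    """Observed audiovisual score minus the model prediction."""
    return p_av_observed - predicted_av(p_a, p_v)

# Example: auditory-only 0.5 correct, visual-only 0.4 correct.
baseline = predicted_av(0.5, 0.4)   # model predicts 0.7
gain = enhancement(0.9, 0.5, 0.4)   # observed 0.9 exceeds prediction by 0.2
```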


Subject(s)
Aging/physiology , Auditory Perception/physiology , Communication , Electronic Data Processing/methods , Interpersonal Relations , Speech Perception/physiology , Adolescent , Adult , Aged , Ear/growth & development , Ear/physiology , Female , Hearing Disorders/physiopathology , Hearing Loss/physiopathology , Humans , Male , Models, Biological , Probability , Visual Perception/physiology