Results 1 - 3 of 3
1.
Clin Exp Otorhinolaryngol; 16(3): 217-224, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37080730

ABSTRACT

OBJECTIVES: To train participants to localize sound using virtual reality (VR) technology, appropriate auditory stimuli that contain accurate spatial cues are essential. The generic head-related transfer function underlying programmed spatial audio in VR does not reflect individual variation in monaural spatial cues, which is critical for auditory spatial perception in patients with single-sided deafness (SSD). Because binaural difference cues are unavailable, impaired auditory spatial perception is a typical problem in the SSD population and warrants intervention. This study assessed the applicability of binaurally recorded auditory stimuli in VR-based sound localization training for SSD patients. METHODS: Sixteen subjects with SSD and 38 normal-hearing (NH) controls underwent VR-based sound localization training and were reassessed 3 weeks after completing it. The VR program incorporated prerecorded auditory stimuli created individually in the SSD group and over an anthropometric model in the NH group. RESULTS: Sound localization performance improved significantly in both groups after training, and the benefit was retained for an additional 3 weeks. Subjective improvements in spatial hearing were confirmed in the SSD group. CONCLUSION: In individuals with SSD and with NH, VR-based sound localization training using individually recorded binaural stimuli was effective and beneficial, and it does not require sophisticated instruments or setups. These results suggest that this technique represents a new therapeutic treatment for impaired sound localization.
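
As background to how spatial cues can be embedded in prerecorded stimuli like these (an illustrative sketch, not the study's implementation), the Python snippet below spatializes a mono signal by convolving it with a left/right pair of head-related impulse responses; the noise burst, HRIR arrays, and sample rate are hypothetical placeholders.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        """Spatialize a mono signal with a left/right pair of head-related
        impulse responses (HRIRs), one convolution per ear."""
        left = fftconvolve(mono, hrir_left, mode="full")
        right = fftconvolve(mono, hrir_right, mode="full")
        stereo = np.stack([left, right], axis=-1)
        return stereo / np.max(np.abs(stereo))    # normalize to avoid clipping

    fs = 44100                                    # assumed sample rate (Hz)
    mono_burst = np.random.randn(fs // 4)         # 250 ms noise burst (placeholder source)
    hrir_l = np.random.randn(256) * 0.01          # stand-in for a measured left-ear HRIR
    hrir_r = np.random.randn(256) * 0.01          # stand-in for a measured right-ear HRIR
    stereo = render_binaural(mono_burst, hrir_l, hrir_r)
    print(stereo.shape)                           # (n_samples, 2)

Individually measured HRIRs, as opposed to the generic placeholders above, are what preserve the monaural spectral cues that SSD listeners depend on.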

2.
Front Neurosci; 14: 600839, 2020.
Article in English | MEDLINE | ID: mdl-33328873

ABSTRACT

Reciprocal inhibitory vestibular interactions following visual stimulation have been understood as a sensory-reweighting mechanism that stabilizes motion perception, but this hypothesis has not been thoroughly investigated with temporally resolved measurements. Virtual reality technology has recently been adopted in various medical domains; however, exposure to virtual reality environments can cause discomfort, including nausea or headache, due to visual-vestibular conflicts. We speculated that self-motion perception could be altered by accelerative visual motion stimulation in virtual reality because of the absence of matching vestibular signals (a visual-vestibular sensory conflict), which could produce this sickness. The current study investigated spatio-temporal profiles of motion perception using immersive virtual reality. We demonstrated alterations in neural dynamics under the sensory mismatch condition (accelerative visual motion stimulation) and in participants reporting high levels of sickness after driving simulation. Additionally, an event-related potential analysis revealed that the high-sickness group showed higher P3 amplitudes in the sensory mismatch condition, suggesting a substantial demand on cognitive resources for motion perception under sensory mismatch.
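
For context on the P3 measure reported above (an illustrative sketch, not the study's analysis pipeline), the snippet below averages stimulus-locked EEG epochs and reads out the mean amplitude in an assumed 300-500 ms P3 window; the epoch arrays and sampling rate are placeholders.

    import numpy as np

    def p3_amplitude(epochs, fs, window=(0.30, 0.50)):
        """Average stimulus-locked epochs (trials x samples, time zero at
        sample 0) and return the mean ERP amplitude in the P3 window (s)."""
        erp = epochs.mean(axis=0)                 # event-related potential
        start, stop = (int(t * fs) for t in window)
        return erp[start:stop].mean()

    fs = 250                                      # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(0)
    mismatch = rng.normal(size=(80, fs))          # 80 one-second epochs (placeholder data)
    congruent = rng.normal(size=(80, fs))
    print(p3_amplitude(mismatch, fs), p3_amplitude(congruent, fs))

A larger value in the mismatch condition than in the congruent condition would correspond to the higher P3 amplitudes described in the abstract.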

3.
Geriatr Gerontol Int; 17(1): 61-68, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26628069

ABSTRACT

AIMS: The goal of the present study was to develop an auditory training program delivered on a mobile device and to test its efficacy in older adults with moderate-to-severe sensorineural hearing loss. METHODS: Of the 20 elderly hearing-impaired listeners who participated, 10 were randomly assigned to a training group (TG) and 10 to a non-training group (NTG) as a control. At baseline, all participants completed vowel, consonant, and sentence tests. The TG then trained for 4 weeks with the mobile program, which had four levels built from 10 Korean nonsense syllables, each level completed in 1 week; traditional auditory training was provided to the NTG during the same period. To evaluate the training effect, both groups repeated the baseline tests after the experiment. RESULTS: Performance on the consonant and sentence tests improved significantly more in the TG than in the NTG, and the improved speech perception scores were retained 2 weeks after training ended. Vowel scores, however, did not change after the 4-week period in either the TG or the NTG. CONCLUSIONS: This pattern of results suggests that a moderate amount of auditory training delivered on a mobile device, with low cost and minimal supervision, is useful for improving the speech understanding of older adults with hearing loss. Geriatr Gerontol Int 2017; 17: 61-68.
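
As a rough illustration of the weekly level progression such a mobile training program might implement (the romanized syllable set, trial count, and simulated listener below are assumptions for illustration, not the authors' design), consider this sketch.

    import random

    SYLLABLES = ["ga", "na", "da", "ra", "ma", "ba", "sa", "ja", "cha", "ka"]  # illustrative set

    def run_block(level, n_trials=20, p_correct=0.7):
        """Simulate one identification block and return the proportion correct.
        The level argument is accepted for completeness; this toy simulation
        ignores difficulty."""
        correct = 0
        for _ in range(n_trials):
            target = random.choice(SYLLABLES)
            # Simulated listener: repeats the target with probability p_correct,
            # otherwise responds with a random guess.
            response = target if random.random() < p_correct else random.choice(SYLLABLES)
            correct += (response == target)
        return correct / n_trials

    def train(weeks=4):
        """One level per week, as in the four-level program described above."""
        for week in range(1, weeks + 1):
            score = run_block(level=week)
            print(f"week {week}: level {week}, score {score:.0%}")

    train()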


Subject(s)
Hearing Aids; Hearing Loss, Sensorineural/physiopathology; Hearing Loss, Sensorineural/rehabilitation; Mobile Applications; Speech Perception/physiology; Age Factors; Aged; Aged, 80 and over; Female; Humans; Male; Republic of Korea