Speech perception using combinations of auditory, visual, and tactile information.
J Rehabil Res Dev; 26(1): 15-24, 1989.
Article in English | MEDLINE | ID: mdl-2521904
Four normally hearing subjects were trained and tested with all combinations of a highly degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. The speech perception of the subjects was assessed with closed sets of vowels, consonants, and multisyllabic words; with open sets of words and sentences; and with speech tracking. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The addition of the tactile input did produce significant improvements for vowel recognition in the auditory-tactile condition, for consonant recognition in the auditory-tactile and visual-tactile conditions, and for open-set word recognition in the visual-tactile condition. Information transmission analysis of the features of vowels and consonants indicated that the information from the auditory and visual inputs was integrated much more effectively than information from the tactile input. The less effective combination might be due to a lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.
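The information transmission analysis mentioned above is conventionally computed from a stimulus-response confusion matrix, as in Miller and Nicely's feature analysis: the mutual information between presented and reported categories is divided by the stimulus entropy to give the proportion of feature information transmitted. The sketch below is an illustrative reconstruction of that calculation, not code from the article; the 3x3 example matrix is hypothetical.

```python
import numpy as np

def transmitted_information(confusions: np.ndarray) -> float:
    """Mutual information T(stimulus; response) in bits, from a count matrix
    (rows = stimuli presented, columns = responses given)."""
    p = confusions / confusions.sum()          # joint probabilities p_ij
    p_stim = p.sum(axis=1, keepdims=True)      # stimulus marginals
    p_resp = p.sum(axis=0, keepdims=True)      # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (p_stim * p_resp))
    return float(np.nansum(terms))             # empty cells contribute zero

def relative_transmission(confusions: np.ndarray) -> float:
    """Proportion of stimulus information transmitted, T / H(stimulus)."""
    p_stim = confusions.sum(axis=1) / confusions.sum()
    log_p = np.log2(p_stim, where=p_stim > 0, out=np.zeros_like(p_stim))
    h_stim = -np.sum(p_stim * log_p)
    return transmitted_information(confusions) / h_stim

# Hypothetical consonant confusion matrix for one presentation condition
counts = np.array([[18, 1, 1],
                   [2, 16, 2],
                   [1, 3, 16]], dtype=float)
print(f"relative information transmitted: {relative_transmission(counts):.2f}")
```

Comparing this proportion across the auditory, visual, and tactile conditions for each vowel and consonant feature is what supports the article's conclusion that auditory and visual information were integrated more effectively than tactile information.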
Database: MEDLINE
Main subject: Rehabilitation / Self-Help Devices / Speech Perception / Communication Aids for Disabled / Total Communication Methods
Limits: Adult / Female / Humans
Language: English
Journal: J Rehabil Res Dev
Journal subject: BIOMEDICAL ENGINEERING / REHABILITATION
Year of publication: 1989
Document type: Article
Country of affiliation: Australia