ABSTRACT
In a previous paper, the authors built a neural network model to recognize the Japanese sign language syllabary, or yubimoji. One problem encountered in that study was the accurate digital representation and distinction of similar yubimoji gestures, i.e., gestures with the same finger flexure but different hand and finger orientations. This study focuses on these gestures. Using data from a glove interface fitted with bend sensors and accelerometers, a neural network was built, trained, and tested, and it distinguished these similar gestures well.
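The abstract's key idea, combining bend-sensor flexure readings with accelerometer tilt to separate gestures that share the same finger configuration, can be sketched as a single input vector. The sensor count, ADC range, and scaling below are illustrative assumptions, not the authors' actual glove specification.

```python
def make_feature_vector(bend_raw, accel_raw, bend_max=1023, g=9.81):
    """Combine glove sensor readings into one network input vector (a sketch).

    bend_raw : raw ADC values, one per instrumented finger joint (assumed 10-bit)
    accel_raw: (ax, ay, az) static acceleration in m/s^2 from the hand-mounted
               accelerometer
    Bend values are scaled to [0, 1]; acceleration is divided by gravity so a
    static hand yields components in roughly [-1, 1]. The tilt components encode
    hand orientation, which is what separates yubimoji gestures that have
    identical finger flexure.
    """
    bends = [b / bend_max for b in bend_raw]
    tilt = [a / g for a in accel_raw]
    return bends + tilt

# Example: five flexure readings plus a palm-down orientation (az = -g)
features = make_feature_vector([512, 300, 800, 150, 1000], (0.0, 0.0, -9.81))
```

Two gestures with identical `bend_raw` but different `accel_raw` now map to different feature vectors, which is the separation the glove alone could not provide.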
Subjects
Communication Aids for Persons with Disabilities, Gestures, Neural Networks (Computer), Sign Language, Fingers, Humans, Japan, Reproducibility of Results, Signal Processing, Computer-Assisted, User-Computer Interface, Vocabulary

ABSTRACT
Effective communication with the hearing and speech impaired often requires at least a basic working knowledge of sign language gestures, without which a memo pad and pen, or a mobile phone's notepad, is indispensable. The aim of this study was to build a neural network that could recognize the static finger-hand gestures of the yubimoji, the Japanese sign language syllabary. To build the network, signal inputs from a data glove interface were recorded for each static yubimoji gesture. The network, a multilayer perceptron, was trained and tested 10 times. Overall, only 18 of the 41 static gestures were successfully recognized. This was attributed in part to the data glove's inability to measure gesture orientation, particularly for yubimoji gestures with similar finger configurations. Future work will address these problems and extend the system to dynamic yubimoji gestures.
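The classifier described above, a multilayer perceptron mapping glove signals to one of 41 static yubimoji classes, can be sketched as a single forward pass. The layer sizes, activations, and random weights below are illustrative assumptions; the paper does not specify the network's architecture or parameters.

```python
import math
import random

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron: tanh hidden units, softmax output."""
    # Hidden layer: h = tanh(W1 x + b1)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer logits: z = W2 h + b2
    z = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(W2, b2)]
    # Numerically stable softmax gives a probability per gesture class
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Illustrative sizes: a small glove feature vector, 16 hidden units,
# and one output per static yubimoji gesture (41 classes)
random.seed(0)
n_in, n_hid, n_out = 8, 16, 41
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

probs = mlp_forward([0.5] * n_in, W1, b1, W2, b2)
predicted = probs.index(max(probs))  # index of the most likely yubimoji class
```

In practice the weights would be learned by backpropagation over labeled glove recordings; the sketch only shows how a trained perceptron turns one glove reading into a class decision.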