Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet.
Alnuaim, Abeer; Zakariah, Mohammed; Hatamleh, Wesam Atef; Tarazi, Hussam; Tripathi, Vikas; Amoatey, Enoch Tetteh.
Affiliation
  • Alnuaim A; Department of Computer Science and Engineering, College of Applied Studies and Community Services King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia.
  • Zakariah M; Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia.
  • Hatamleh WA; Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia.
  • Tarazi H; Department of Computer Science and Informatics, School of Engineering and Computer Science, Oakland University, 318 Meadow Brook Rd, Rochester, MI 48309, USA.
  • Tripathi V; Department of Computer Science & Engineering, College of Graphic Era Deemed to be University, Dehradun, Uttarakhand, India.
  • Amoatey ET; School of Engineering, University for Development Studies, Tamale, Ghana.
Comput Intell Neurosci ; 2022: 8777355, 2022.
Article in English | MEDLINE | ID: mdl-35378817
ABSTRACT
Sign language is the native language of deaf people and facilitates their daily communication; this study targets that communication problem. Sign language refers to the use of the arms and hands to communicate, particularly among the deaf, and it varies by person and by region of origin. As a result, there is no single standard: American, British, Chinese, and Arabic sign languages are all distinct. In this study we trained a model to classify Arabic sign language, which comprises 32 Arabic alphabet sign classes; in images, the sign is detected through the pose of the hand. We propose a framework of two CNN models, each trained individually on the training set, whose final predictions are ensembled to achieve higher accuracy. The dataset used in this study, ArSL2018, was released in 2019 by Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contributions of this study are resizing the images to 64 × 64 pixels, converting the grayscale images to three-channel images, and applying a median filter, which acts as low-pass filtering to smooth the images and reduce noise, making the model more robust and helping to avoid overfitting. The preprocessed images are then fed into two models, ResNet50 and MobileNetV2, implemented together. After applying several preprocessing techniques, different hyperparameters for each model, and various data augmentation techniques, we achieved an accuracy of about 97% on the test set for the whole dataset.
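The preprocessing and ensembling steps described above can be sketched as follows. This is a minimal illustration, not the authors' code: the 3 × 3 median-filter kernel, nearest-neighbor resizing, and simple averaging of the two models' softmax outputs are all assumptions, since the abstract does not specify these details.

```python
import numpy as np

def resize_nearest(img: np.ndarray, size=(64, 64)) -> np.ndarray:
    """Resize a 2-D grayscale image to 64x64 with nearest-neighbor sampling."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

def median_filter3(img: np.ndarray) -> np.ndarray:
    """Apply a 3x3 median filter (low-pass smoothing) with edge padding."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted views of the image and take the per-pixel median.
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Resize to 64x64, smooth with a median filter, expand to 3 channels."""
    img = median_filter3(resize_nearest(gray))
    rgb = np.repeat(img[..., None], 3, axis=-1)  # grayscale -> three channels
    return rgb.astype(np.float32) / 255.0        # scale to [0, 1]

def ensemble_predict(probs_resnet: np.ndarray,
                     probs_mobilenet: np.ndarray) -> np.ndarray:
    """Combine the two models' class probabilities by simple averaging
    (an assumption; the paper only states the predictions were ensembled)."""
    return np.argmax((probs_resnet + probs_mobilenet) / 2.0, axis=-1)
```

In practice the 64 × 64 × 3 output of `preprocess` would be batched and fed to ResNet50 and MobileNetV2 (e.g. the Keras `applications` implementations), and `ensemble_predict` applied to their softmax outputs over the 32 sign classes.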
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Communication Aids for Disabled Persons / Gestures Study type: Prognostic_studies Limits: Humans Country/Region as subject: North America Language: English Journal: Comput Intell Neurosci Journal subject: MEDICAL INFORMATICS / NEUROLOGY Publication year: 2022 Document type: Article Country of affiliation: Saudi Arabia Country of publication: United States