Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model.
Noor, Talal H; Noor, Ayman; Alharbi, Ahmed F; Faisal, Ahmed; Alrashidi, Rakan; Alsaedi, Ahmed S; Alharbi, Ghada; Alsanoosy, Tawfeeq; Alsaeedi, Abdullah.
Affiliation
  • Noor TH; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Noor A; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alharbi AF; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Faisal A; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alrashidi R; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alsaedi AS; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alharbi G; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alsanoosy T; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
  • Alsaeedi A; Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia.
Sensors (Basel); 24(11); 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894473
ABSTRACT
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters in some languages, especially in Saudi Arabia. This shortage leaves a large proportion of the hearing-impaired population without access to services, especially in public places. This paper aims to address this accessibility gap by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. In this paper, we propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to capture temporal characteristics and handle sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created a dataset of 20 different words, comprising 4000 images for 10 static ArSL gesture words and 500 videos for 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
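The hybrid architecture described above pairs a CNN (per-frame spatial features) with an LSTM (temporal dynamics across frames). A minimal NumPy sketch of that forward pass is shown below; all sizes (8 frames of 16x16 grayscale pixels, a single 3x3 filter, a 32-unit LSTM, 10 word classes) and all weights are hypothetical illustrations, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 2D valid cross-correlation: extracts spatial features from one frame."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input/forget/output gates plus candidate cell state."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))      # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n]))  # output gate
    g = np.tanh(z[3 * n:])                 # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes: 8 video frames, 16x16 pixels, 10 dynamic-gesture word classes.
T, side, hidden, n_classes = 8, 16, 32, 10
feat_dim = 7 * 7                           # 14x14 conv output, stride-2 pooled to 7x7
frames = rng.standard_normal((T, side, side))
kernel = rng.standard_normal((3, 3)) * 0.1

W = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
W_out = rng.standard_normal((n_classes, hidden)) * 0.1

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(T):
    feat = np.maximum(conv2d_valid(frames[t], kernel), 0)  # CNN stage: conv + ReLU
    feat = feat[::2, ::2].ravel()                          # stride-2 pooling, flatten
    h, c = lstm_step(feat, h, c, W, U, b)                  # LSTM stage: temporal state

logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted word class:", int(probs.argmax()))
```

In a trained system the convolution filters, LSTM weights, and output layer would be learned from the labeled gesture videos; this sketch only illustrates how spatial features per frame feed the recurrent state that accumulates hand-movement dynamics.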
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Sign Language / Neural Networks, Computer / Deep Learning Limits: Humans Country/Region as subject: Asia Language: English Journal: Sensors (Basel) Year: 2024 Document type: Article Affiliation country: Saudi Arabia