Results 1 - 20 of 755
1.
Neuroimage ; : 120720, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38971484

ABSTRACT

This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. This study also showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior temporal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.

2.
Front Artif Intell ; 7: 1297347, 2024.
Article in English | MEDLINE | ID: mdl-38957453

ABSTRACT

Addressing the increasing demand for accessible sign language learning tools, this paper introduces an innovative Machine Learning-Driven Web Application dedicated to Sign Language Learning. This web application represents a significant advancement in sign language education. Unlike traditional approaches, the application assigns users different words to spell. Users are tasked with signing each letter of the word, earning a point upon correctly signing the entire word. The paper describes the development, features, and machine learning framework underlying the application. Developed using HTML, CSS, JavaScript, and Flask, the web application accesses the user's webcam for a live video feed, displaying the model's predictions on-screen to facilitate interactive practice sessions. The primary aim is to provide a learning platform for those who are not familiar with sign language, offering them the opportunity to acquire this essential skill and fostering inclusivity in the digital age.
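Not from the paper: a minimal sketch of a Flask route that streams webcam frames with a predicted letter drawn on each frame. The paper's application captures video from the user's browser webcam; this server-side OpenCV variant is only for illustration, and predict_letter is a hypothetical stand-in for the trained model.

    import cv2
    from flask import Flask, Response

    app = Flask(__name__)

    def predict_letter(frame):
        return "A"  # hypothetical placeholder for the trained model's prediction

    def frame_stream():
        cap = cv2.VideoCapture(0)  # default webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            letter = predict_letter(frame)
            cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
            _, jpg = cv2.imencode(".jpg", frame)
            yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n"

    @app.route("/video")
    def video():
        return Response(frame_stream(), mimetype="multipart/x-mixed-replace; boundary=frame")

    if __name__ == "__main__":
        app.run(debug=True)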

3.
PeerJ Comput Sci ; 10: e2063, 2024.
Article in English | MEDLINE | ID: mdl-38983191

ABSTRACT

The lack of an effective early sign language learning framework for the hard-of-hearing population can have traumatic consequences, causing social isolation and unfair treatment in workplaces. Alphabet and digit detection methods have been the basic framework for early sign language learning but are restricted by performance and accuracy, making it difficult to detect signs in real life. This article proposes an improved sign language detection method for early sign language learners based on the You Only Look Once version 8.0 (YOLOv8) algorithm, referred to as the intelligent sign language detection system (iSDS), which exploits the power of deep learning to detect sign language-distinct features. The iSDS method reduces false positive rates and improves the accuracy as well as the speed of sign language detection. The proposed iSDS framework for early sign language learners consists of three basic steps: (i) image pixel processing to extract features that are underrepresented in the frame, (ii) inter-dependence pixel-based feature extraction using YOLOv8, and (iii) web-based signer independence validation. The proposed iSDS enables faster response times and reduces misinterpretation and inference delay time. The iSDS achieved state-of-the-art performance of over 97% for precision, recall, and F1-score, with a best mAP of 87%. The proposed iSDS method has several potential applications, including continuous sign language detection systems and intelligent web-based sign recognition systems.
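Not from the paper: a minimal sketch of YOLOv8 inference with the ultralytics package, since the iSDS pipeline is described as YOLOv8-based; the weights file name, webcam source, and confidence threshold are hypothetical.

    from ultralytics import YOLO

    model = YOLO("isds_signs.pt")  # hypothetical weights fine-tuned on sign classes
    results = model.predict(source=0, stream=True, conf=0.5)  # source=0: default webcam
    for r in results:
        for box in r.boxes:
            label = model.names[int(box.cls)]
            print(label, float(box.conf))  # detected sign and its confidence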

4.
Cognition ; 251: 105878, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39024841

ABSTRACT

This study investigated Cantonese and Hong Kong Sign Language (HKSL) phonological activation patterns in Hong Kong deaf readers using the ERP technique. Two experiments employing the error disruption paradigm were conducted while recording participants' EEGs. Experiment 1 focused on orthographic and speech-based phonological processing, while Experiment 2 examined sign-phonological processing. ERP analyses focused on the P200 (180-220 ms) and N400 (300-500 ms) components. The results of Experiment 1 showed that hearing readers exhibited both orthographic and phonological effects in the P200 and N400 windows, consistent with previous studies on Chinese reading. In deaf readers, significant speech-based phonological effects were observed in the P200 window, and orthographic effects spanned both the P200 and N400 windows. Comparative analysis between the two groups revealed distinct spatial distributions for orthographic and speech-based phonological ERP effects, which may indicate the engagement of different neural networks during early processing stages. Experiment 2 found evidence of sign-phonological activation in both the P200 and N400 windows among deaf readers, which may reflect the involvement of sign-phonological representations in early lexical access and later semantic integration. Furthermore, exploratory analysis revealed that higher reading fluency in deaf readers correlated with stronger orthographic effects in the P200 window and diminished effects in the N400 window, indicating that efficient orthographic processing during early lexical access is a distinguishing feature of proficient deaf readers.

5.
Hear Res ; 451: 109074, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39018768

ABSTRACT

Many children with profound hearing loss have received cochlear implants (CI) to help restore some sense of hearing. There is, however, limited research on long-term neurocognitive outcomes in young adults who have grown up hearing through a CI. This study compared the cognitive outcomes of early-implanted (n = 20) and late-implanted (n = 21) young adult CI users, and typically hearing (TH) controls (n = 56), all of whom were enrolled in college. Cognitive fluidity, nonverbal intelligence, and American Sign Language (ASL) comprehension were assessed, revealing no significant differences in cognition and nonverbal intelligence between the early- and late-implanted groups. However, there was a difference in ASL comprehension, with the late-implanted group scoring significantly higher. Although young adult CI users showed significantly lower scores in a working memory and processing speed task than TH age-matched controls, there were no significant differences in tasks involving executive function shifting, inhibitory control, and episodic memory between young adult CI and young adult TH participants. In an exploratory analysis of a subset of CI participants (n = 17) in whom we were able to examine crossmodal plasticity, we saw greater evidence of crossmodal recruitment from the visual system in late-implanted compared with early-implanted CI young adults. However, cortical visual evoked potential latency biomarkers of crossmodal plasticity were not correlated with cognitive measures or ASL comprehension. The results suggest that in the late-implanted CI users, early access to sign language may have served as a scaffold for appropriate cognitive development, while in the early-implanted group early access to oral language benefited cognitive development. Furthermore, our results suggest that the persistence of crossmodal neuroplasticity into adulthood does not necessarily impact cognitive development. In conclusion, early access to language - spoken or signed - may be important for cognitive development, with no observable effect of crossmodal plasticity on cognitive outcomes.

6.
Article in English | MEDLINE | ID: mdl-39010653

ABSTRACT

Flexible strain sensors have been widely researched in fields such as smart wearables, human health monitoring, and biomedical applications. However, achieving a wide sensing range and high sensitivity of flexible strain sensors simultaneously remains a challenge, limiting their further applications. To address these issues, a cross-scale combinatorial bionic hierarchical design featuring microscale morphology combined with a macroscale base to balance the sensing range and sensitivity is presented. Inspired by the combination of serpentine and butterfly wing structures, this study employs three-dimensional printing, prestretching, and mold transfer processes to construct a combinatorial bionic hierarchical flexible strain sensor (CBH-sensor) with serpentine-shaped inverted-V-groove/wrinkling-cracking structures. The CBH-sensor has a wide sensing range of up to 150% strain and high sensitivity, with a gauge factor of up to 2416.67. In addition, the study demonstrates the application of the CBH-sensor array in sign language gesture recognition, successfully identifying nine different sign language gestures with an accuracy of 100% with the assistance of machine learning. The CBH-sensor exhibits considerable promise for use in enabling unobstructed communication between individuals who use sign language and those who do not. Furthermore, it has wide-ranging possibilities for use in the field of gesture-driven interactions in human-computer interfaces.
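For context (not from the paper, numbers made up): the gauge factor quoted above is the relative resistance change per unit strain, GF = (delta R / R0) / strain.

    # Gauge factor GF = (delta_R / R0) / strain; all values below are hypothetical.
    R0 = 100.0                     # unstrained resistance (ohms)
    R = 220.0                      # resistance at 10% strain
    strain = 0.10                  # 10% elongation
    gf = ((R - R0) / R0) / strain  # -> 12.0, far below the 2416.67 reported above
    print(gf)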

7.
Int Arch Otorhinolaryngol ; 28(3): e517-e522, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974642

ABSTRACT

Introduction: The World Health Organization (WHO) estimates that ∼32 million children worldwide are affected by hearing loss (HL). The cochlear implant is the first-line treatment for severe to profound sensorineural HL and is considered one of the most successful prostheses developed to date. Objective: To evaluate the oral language development of pediatric patients with prelingual deafness implanted in a reference hospital for the treatment of HL in southern Brazil. Methods: We conducted a retrospective cohort study with a review of medical records of patients undergoing cochlear implant surgery between January 2009 and December 2018. Language development was assessed by reviewing consultations with speech therapy professionals from the cochlear implant group. Results: A total of 152 children were included in the study. The mean age at cochlear implant surgery was 41 months (standard deviation [SD]: ± 15). The patients were divided into six groups considering the type of language most used in their daily lives. We found that 36% of children use oral language as their primary form of communication. In a subanalysis, we observed that patients with developed or developing oral language had undergone cochlear implant surgery earlier than patients using Brazilian Sign Language (Língua Brasileira de Sinais, LIBRAS, in Portuguese) or those without developed language. Conclusion: The cochlear implant is a state-of-the-art technology that enables the re-establishment of the sense of hearing and the development of oral language. However, language development is a complex process with a critical period during which it must occur. We still see many patients receiving late diagnosis and treatment, which implies a delay and, often, the impossibility of developing oral communication. Level of Evidence: Level 3 (cohort study).

8.
J Imaging ; 10(6)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38921626

ABSTRACT

Sign language recognition technology can help people with hearing impairments communicate with hearing people. Deep learning now provides substantial technical support for sign language recognition. In sign language recognition tasks, traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources for training while ensuring the generalization of the network, which poses a challenge for recognition. In this paper, we present a video-based sign language recognition method based on Residual Network (ResNet) and Long Short-Term Memory (LSTM). As the number of network layers increases, the ResNet backbone effectively mitigates the gradient explosion problem and yields better features. We use the ResNet convolutional network as the backbone model, and LSTM uses gates to control unit states and update the output features of sequences. ResNet extracts the sign language features, and the learned feature space is then used as the input of the LSTM network to obtain long-sequence features. This combination effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on Argentine Sign Language (LSA64).
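Not the authors' code: a minimal PyTorch sketch of the ResNet-plus-LSTM pattern described above, with a ResNet-18 backbone encoding each frame and an LSTM aggregating frame features over time; the hidden size and class count are hypothetical.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class ResNetLSTM(nn.Module):
        def __init__(self, hidden_size=512, num_classes=64):
            super().__init__()
            backbone = resnet18(weights=None)
            backbone.fc = nn.Identity()  # keep 512-d frame embeddings
            self.backbone = backbone
            self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, clips):  # clips: (batch, time, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.backbone(clips.flatten(0, 1))   # (batch*time, 512)
            _, (h, _) = self.lstm(feats.view(b, t, -1))  # last hidden state
            return self.head(h[-1])                      # (batch, num_classes)

    logits = ResNetLSTM()(torch.randn(2, 16, 3, 224, 224))  # toy clip batch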

9.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894473

ABSTRACT

Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters in some languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. In this paper, we propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to extract spatial and temporal characteristics to handle sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created an ArSL dataset of 20 different words: 4000 images for 10 static gesture words and 500 videos for 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.


Subject(s)
Deep Learning, Neural Networks, Computer, Sign Language, Humans, Saudi Arabia, Language, Gestures
10.
Neurobiol Lang (Camb) ; 5(2): 553-588, 2024.
Article in English | MEDLINE | ID: mdl-38939730

ABSTRACT

We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, albeit phonemic discrimination was poor relative to hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater neural activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native language phonemic contrasts for deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead potentially offset the negative effects of language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives regarding the impact of ASL exposure in the context of CI usage.

11.
PeerJ Comput Sci ; 10: e2054, 2024.
Article in English | MEDLINE | ID: mdl-38855212

ABSTRACT

This article presents an innovative approach for the task of isolated sign language recognition (SLR); this approach centers on the integration of pose data with motion history images (MHIs) derived from these data. Our research combines spatial information obtained from body, hand, and face poses with the comprehensive details provided by three-channel MHI data concerning the temporal dynamics of the sign. In particular, our finger pose-based MHI (FP-MHI) feature significantly enhances recognition success, capturing the nuances of finger movements and gestures, unlike existing approaches in SLR. This feature improves the accuracy and reliability of SLR systems by more accurately capturing the fine details and richness of sign language. Additionally, we enhance the overall model accuracy by predicting missing pose data through linear interpolation. Our study, based on the randomized leaky rectified linear unit (RReLU) enhanced ResNet-18 model, successfully handles the interaction between manual and non-manual features through the fusion of extracted features and classification with a support vector machine (SVM). In our experiments, this integration yields results competitive with or superior to current SLR methodologies across various datasets, including BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL.
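Not the authors' code: a minimal NumPy sketch of how a single-channel motion history image can be accumulated from consecutive frames; the threshold and decay constant are hypothetical, and the paper builds three such channels from pose data rather than raw frames.

    import numpy as np

    def update_mhi(mhi, prev_frame, frame, tau=15, thresh=30):
        # Decay the MHI by one step and stamp newly moving pixels to the maximum value tau.
        motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
        mhi = np.maximum(mhi - 1, 0)  # older motion fades out
        mhi[motion] = tau             # newest motion is brightest
        return mhi

    frames = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(16)]
    mhi = np.zeros((128, 128), dtype=np.int16)
    for prev, cur in zip(frames, frames[1:]):
        mhi = update_mhi(mhi, prev, cur)
    # Normalizing mhi to [0, 1] yields one temporal channel of an MHI image.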

12.
Acta Neurochir (Wien) ; 166(1): 260, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858238

ABSTRACT

The aim of this case study was to describe differences in English and British Sign Language (BSL) communication caused by a left temporal tumour, resulting in discordant presentation across symptoms, intraoperative stimulation mapping during awake craniotomy, and post-operative language abilities. We report the first case of a hearing child of deaf adults, who acquired BSL as a first language with English as a second language. The patient presented with English word-finding difficulty, phonemic paraphasias, and reading and writing challenges, with BSL preserved. Intraoperatively, object naming and semantic fluency tasks were performed in English and BSL, revealing differential language maps for each modality. Post-operative assessment confirmed mild dysphasia for English with BSL preserved. These findings suggest that in hearing people who acquire a signed language as a first language, topographical organisation may differ from that of a second, spoken language.


Subject(s)
Brain Neoplasms, Craniotomy, Glioblastoma, Sign Language, Temporal Lobe, Humans, Glioblastoma/surgery, Craniotomy/methods, Brain Neoplasms/surgery, Brain Neoplasms/complications, Brain Neoplasms/diagnostic imaging, Temporal Lobe/surgery, Temporal Lobe/diagnostic imaging, Brain Mapping/methods, Male, Wakefulness/physiology, Speech/physiology, Multilingualism, Language, Adult
13.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life for the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying and converting right-hand gestures into audible sounds or displayed text. METHODS: To achieve the objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an average accuracy of 99.65% in recognizing right-hand gestures, with a precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain letters by successfully distinguishing their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.
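Not the paper's model: a minimal PyTorch sketch of a small CNN classifier for static right-hand gesture images; the layer sizes and the assumed 28-class output (one per Arabic letter) are hypothetical.

    import torch
    import torch.nn as nn

    class GestureCNN(nn.Module):
        def __init__(self, num_classes=28):  # assumed: one class per Arabic letter sign
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x):  # x: (batch, 3, H, W) cropped webcam frames
            return self.classifier(self.features(x).flatten(1))

    probs = GestureCNN()(torch.randn(1, 3, 64, 64)).softmax(dim=1)  # toy input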

15.
Cognition ; 249: 105811, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38776621

ABSTRACT

Adults with no knowledge of sign languages can perceive distinctive markers that signal event boundedness (telicity), suggesting that telicity is a cognitively natural semantic feature that can be marked iconically (Strickland et al., 2015). This study asks if non-signing children (5-year-olds) can also link telicity to iconic markers in sign. Experiment 1 attempted three close replications of Strickland et al. (2015) and found only limited success. However, Experiment 2 showed that children can both perceive the relevant visual feature and can succeed at linking the visual property to telicity semantics when allowed to filter their answer through their own linguistic choices. Children's performance demonstrates the cognitive naturalness and early availability of the semantics of telicity, supporting the idea that telicity helps guide the language acquisition process.


Subject(s)
Sign Language, Humans, Male, Female, Child, Preschool, Semantics, Language Development
16.
J Appl Behav Anal ; 57(3): 657-667, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38742862

ABSTRACT

Multiple-baseline-across-word-sets designs were used to determine whether a computer-based intervention would enhance accurate word signing with four participants. Each participant was a hearing college student with reading disorders. Learning trials included 3 s to observe printed words on the screen and a video model performing the sign twice (i.e., simultaneous prompting), 3 s to make the sign, 3 s to observe the same clip, and 3 s to make the sign again. For each participant and word set, no words were accurately signed during baseline. After the intervention, all four participants increased their accurate word signing across all three word sets, providing 12 demonstrations of experimental control. For each participant, accurate word signing was maintained. Application of efficient, technology-based, simultaneous prompting interventions for enhancing American Sign Language learning and future research designed to investigate causal mechanisms and optimize intervention effects are discussed.


Subject(s)
Dyslexia, Sign Language, Humans, Male, Dyslexia/rehabilitation, Dyslexia/therapy, Female, Computer-Assisted Instruction/methods, Young Adult, Learning, Students/psychology
17.
MedEdPORTAL ; 20: 11396, 2024.
Article in English | MEDLINE | ID: mdl-38722734

ABSTRACT

Introduction: People with disabilities and those with non-English language preferences have worse health outcomes than their counterparts due to barriers to communication and poor continuity of care. As members of both groups, people who are Deaf users of American Sign Language have compounded health disparities. Provider discomfort with these specific demographics is a contributing factor, often stemming from insufficient training in medical programs. To help address these health disparities, we created a session on disability, language, and communication for undergraduate medical students. Methods: This 2-hour session was developed as a part of a 2020 curriculum shift for a total of 404 second-year medical student participants. We utilized a retrospective postsession survey to analyze learning objective achievement through a comparison of medians using the Wilcoxon signed rank test (α = .05) for the first 2 years of course implementation. Results: When assessing 158 students' self-perceived abilities to perform each of the learning objectives, students reported significantly higher confidence after the session compared to their retrospective presession confidence for all four learning objectives (ps < .001, respectively). Responses signifying learning objective achievement (scores of 4, probably yes, or 5, definitely yes), when averaged across the first 2 years of implementation, increased from 73% before the session to 98% after the session. Discussion: Our evaluation suggests medical students could benefit from increased educational initiatives on disability culture and health disparities caused by barriers to communication, to strengthen cultural humility, the delivery of health care, and, ultimately, health equity.
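Not the study's data: a minimal SciPy sketch of the paired pre/post comparison described above, using hypothetical 5-point self-ratings for one learning objective.

    from scipy.stats import wilcoxon

    pre  = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]  # hypothetical presession self-ratings (1-5)
    post = [4, 4, 5, 4, 5, 4, 5, 4, 4, 5]  # hypothetical postsession self-ratings (1-5)

    stat, p = wilcoxon(pre, post)       # paired, non-parametric test on the differences
    print(f"W = {stat}, p = {p:.4f}")   # reject H0 at alpha = .05 if p < .05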


Subject(s)
Curriculum, Decision Making, Shared, Disabled Persons, Education, Medical, Undergraduate, Students, Medical, Humans, Students, Medical/psychology, Students, Medical/statistics & numerical data, Retrospective Studies, Education, Medical, Undergraduate/methods, Communication Barriers, Surveys and Questionnaires, Male, Female, Sign Language, Language
18.
Heliyon ; 10(9): e29678, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38699011

ABSTRACT

Speech and hearing impairments are among the most common problems in Indian society and can affect anyone, children and adults alike. Many treatments can help overcome hearing problems; different types of hearing aids and cochlear implants amplify sounds for better hearing. Sign language is highly systematic and has its own grammar and syntax. Still, owing to limited awareness, many persons with hearing impairments are unfamiliar with the institutions where they can learn it and equip themselves for communication. This paper describes an approach, based on the Indian Sign Language system, to help persons with speech and hearing impairments communicate freely with persons who do not have such disabilities. To this end, a system is needed that can act as an interpreter for persons with speech and hearing impairments. The interpreter system is built around a robotic hand model and is programmed on a Raspberry Pi 4. The experimental results show that the robotic hand generated the different alphabet signs corresponding to speech commands uttered by an individual. Several experimental trials were conducted by ten persons without hearing disabilities, and the results of five trials are shown in this paper. Performance parameters and statistical analyses are also reported to better analyze and interpret the experimental results. Based on these results, the proposed robotic hand interpreter system accurately generates gestures corresponding to the different alphabet signs used in the Indian Sign Language system, yielding an overall accuracy of 94 percent.
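Not the paper's implementation: a sketch of how a recognized letter might be mapped to a fingerspelling pose on a Raspberry Pi using the gpiozero library; the pin assignments, servo angles, and two example poses are hypothetical.

    from time import sleep
    from gpiozero import AngularServo

    # Hypothetical GPIO pins for five finger servos.
    FINGERS = {name: AngularServo(pin, min_angle=0, max_angle=90)
               for name, pin in [("thumb", 17), ("index", 18), ("middle", 27),
                                 ("ring", 22), ("pinky", 23)]}

    # 0 = finger extended, 90 = finger curled (hypothetical calibration).
    POSES = {
        "A": {"thumb": 0, "index": 90, "middle": 90, "ring": 90, "pinky": 90},
        "B": {"thumb": 90, "index": 0, "middle": 0, "ring": 0, "pinky": 0},
    }

    def sign_letter(letter):
        for finger, angle in POSES[letter].items():
            FINGERS[finger].angle = angle
        sleep(1.0)  # hold the pose before moving to the next letter

    for ch in "AB":
        sign_letter(ch)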

19.
Biosensors (Basel) ; 14(5)2024 May 03.
Article in English | MEDLINE | ID: mdl-38785701

ABSTRACT

At the heart of the non-implantable electronic revolution lie ionogels, which are remarkably conductive, thermally stable, and even antimicrobial materials. Yet, their potential has been hindered by poor mechanical properties. Herein, a double network (DN) ionogel crafted from 1-ethyl-3-methylimidazolium chloride ([Emim]Cl), acrylamide (AM), and polyvinyl alcohol (PVA) was constructed. Tensile strength, fracture elongation, and conductivity can be adjusted across a wide range, enabling researchers to fabricate the material to meet specific needs. With adjustable mechanical properties, such as tensile strength (0.06-5.30 MPa) and fracture elongation (363-1373%), this ionogel possesses both robustness and flexibility. This ionogel exhibits a bi-modal response to temperature and strain, making it an ideal candidate for strain sensor applications. It also functions as a flexible strain sensor that can detect physiological signals in real time, opening doors to personalized health monitoring and disease management. Moreover, these gels' ability to decode the intricate movements of sign language paves the way for improved communication accessibility for the deaf and hard-of-hearing community. This DN ionogel lays the foundation for a future in which e-skins and wearable sensors will seamlessly integrate into our lives, revolutionizing healthcare, human-machine interaction, and beyond.


Subject(s)
Sign Language, Humans, Polyvinyl Alcohol/chemistry, Monitoring, Physiologic, Wearable Electronic Devices, Gels/chemistry, Imidazoles/chemistry, Biosensing Techniques, Acrylamide, Tensile Strength
20.
Sensors (Basel) ; 24(10)2024 May 14.
Article in English | MEDLINE | ID: mdl-38793964

ABSTRACT

Deaf and hard-of-hearing people mainly communicate using sign language, which is a set of signs made using hand gestures combined with facial expressions to make meaningful and complete sentences. The problem that faces deaf and hard-of-hearing people is the lack of automatic tools that translate sign languages into written or spoken text, which has led to a communication gap between them and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on translating non-Arabic sign languages, with few targeting the Arabic Sign Language (ArSL) and even fewer targeting the Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia to communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL to text and audio and includes unique features that are not available in other related applications targeting ArSL. When evaluated on a comprehensive dataset, the proposed approach demonstrated its effectiveness, outperforming several state-of-the-art approaches and producing results comparable to others. Moreover, testing the prototype with several deaf and hard-of-hearing users, as well as hearing users, demonstrated its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features.


Subject(s)
Deep Learning, Sign Language, Humans, Saudi Arabia, Mobile Applications, Deafness/physiopathology, Persons With Hearing Impairments