Results 1 - 20 of 750
1.
MethodsX ; 13: 102901, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39247156

ABSTRACT

Interaction and communication are more difficult for people with speech and hearing disabilities, who may face communication barriers with other people. Sign language helps reduce this communication gap between hearing and non-hearing individuals. Prior solutions using machine learning techniques such as Convolutional Neural Networks, Support Vector Machines, and K-Nearest Neighbors have either demonstrated low accuracy or have not been implemented as real-time working systems; this system addresses both issues. This work addresses the difficulties faced when classifying the characters of Indian Sign Language (ISL) and can identify a total of 23 hand poses of the ISL. The system uses a pre-trained VGG16 Convolutional Neural Network (CNN) with an attention mechanism. The model is trained using the Adam optimizer and the cross-entropy loss function. The results demonstrate the effectiveness of transfer learning for ISL classification, achieving an accuracy of 97.5% with VGG16 and 99.8% with VGG16 plus the attention mechanism.
• Enables quick and accurate sign language recognition using a trained VGG16 model with an attention mechanism.
• The system does not require external gloves or sensors, eliminating the need for physical hardware and reducing cost.
• Real-time processing makes the system more helpful for people with speaking and hearing disabilities, making it easier for them to communicate with others.
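A minimal sketch of the approach the abstract describes: a frozen, ImageNet-pretrained VGG16 backbone with a channel-attention head, trained with Adam and cross-entropy. The attention design, layer sizes, and training details here are illustrative assumptions, not the authors' published code.

```python
import torch
import torch.nn as nn
from torchvision import models

class AttentionHead(nn.Module):
    """Squeeze-style channel attention followed by a 23-class ISL classifier."""
    def __init__(self, channels=512, num_classes=23):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // 16), nn.ReLU(),
            nn.Linear(channels // 16, channels), nn.Sigmoid(),
        )
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feats):                 # feats: (N, 512, H, W)
        w = self.attn(feats)                  # per-channel weights (N, 512)
        pooled = feats.mean(dim=(2, 3)) * w   # reweighted global average pool
        return self.fc(pooled)

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
for p in backbone.parameters():               # transfer learning: freeze backbone
    p.requires_grad = False
model = nn.Sequential(backbone, AttentionHead())

optimizer = torch.optim.Adam(model[1].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()             # Adam + cross-entropy, per the abstract
```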

2.
J Commun Disord ; 111: 106454, 2024.
Article in English | MEDLINE | ID: mdl-39142008

ABSTRACT

This study explores the narrative skills of deaf and hearing children within the context of Arabic diglossia, a linguistic environment characterised by significant differences between spoken dialects and formal written language. Using Stein and Glenn's (1979) and Bruner's (1991) frameworks, the research analyses the narrative constructions of 13 hearing and 13 deaf children in Kuwait. The findings reveal that hearing children, benefiting from consistent exposure to spoken and formal Arabic, produced more coherent and detailed narratives compared to deaf children. Hearing participants also demonstrated greater vocabulary diversity. Age-related improvements in narrative skills were more pronounced among hearing children, while the impact of sign language exposure on narrative abilities was significant among deaf children. The study underscores the critical role of early language exposure and educational support in fostering narrative development, particularly in a diglossic context. These findings highlight the need for specialised educational strategies to support the unique narrative development needs of deaf children.


Subject(s)
Deafness; Narration; Humans; Child; Male; Female; Deafness/psychology; Kuwait; Sign Language; Child, Preschool; Language; Vocabulary; Persons With Hearing Impairments/psychology
3.
Front Hum Neurosci ; 18: 1391531, 2024.
Article in English | MEDLINE | ID: mdl-39099602

ABSTRACT

Hand gestures are a natural and intuitive form of communication, and integrating this communication method into robotic systems presents significant potential to improve human-robot collaboration. Recent advances in motor neuroscience have focused on replicating human hand movements from synergies, also known as movement primitives. Synergies, fundamental building blocks of movement, serve as a potential strategy adopted by the central nervous system to generate and control movements. Identifying how synergies contribute to movement can help in dexterous control of robotics, exoskeletons, and prosthetics, and extend their applications to rehabilitation. In this paper, 33 static hand gestures were recorded through a single RGB camera and identified in real-time through the MediaPipe framework as participants made various postures with their dominant hand. Assuming an open palm as the initial posture, uniform joint angular velocities were obtained for all these gestures. By applying a dimensionality reduction method, kinematic synergies were obtained from these joint angular velocities. Kinematic synergies that explain 98% of the variance of movements were utilized to reconstruct new hand gestures using convex optimization. Reconstructed hand gestures and selected kinematic synergies were translated onto a humanoid robot, Mitra, in real-time, as the participants demonstrated various hand gestures. The results showed that by using only a few kinematic synergies it is possible to generate various hand gestures, with 95.7% accuracy. Furthermore, utilizing low-dimensional synergies in the control of high-dimensional end effectors holds promise to enable near-natural human-robot collaboration.
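A brief sketch of how such kinematic synergies might be extracted. The data layout, the file name, and the least-squares reconstruction (standing in for the paper's convex optimization) are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# velocities: (n_samples, n_joint_angles) uniform joint angular velocities
# derived from MediaPipe hand landmarks, starting from an open-palm posture.
velocities = np.load("joint_angular_velocities.npy")  # hypothetical file

pca = PCA(n_components=0.98)          # retain 98% of movement variance
pca.fit(velocities)
synergies = pca.components_           # (n_synergies, n_joint_angles)
scores = pca.transform(velocities)    # low-dimensional activation of each synergy

# A new gesture is approximated as a combination of synergies; the paper uses
# convex optimization, while an unconstrained least-squares fit is shown here.
coeffs, *_ = np.linalg.lstsq(synergies.T, velocities[0] - pca.mean_, rcond=None)
reconstruction = pca.mean_ + synergies.T @ coeffs
```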

4.
Top Cogn Sci ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39190828

ABSTRACT

Languages are neither designed in classrooms nor drawn from dictionaries; they are products of human minds and human interactions. However, it is challenging to understand how structure grows in these circumstances because generations of use and transmission shape and reshape the structure of the languages themselves. Laboratory studies on language emergence investigate the origins of language structure by requiring participants, prevented from using their own natural language(s), to create a novel communication system and then transmit it to others. Because the participants in these lab studies are already speakers of a language, it is easy to question the relevance of lab-based findings to the creation of natural language systems. Here, we take the findings from a lab-based language emergence paradigm and test whether the same pattern is also found in a new natural language: Nicaraguan Sign Language. We find evidence that signers of Nicaraguan Sign Language may show the same biases seen in lab-based language emergence studies: (1) they appear to condition word order based on the semantic dimension of intensionality and extensionality, and (2) they adjust this conditioning to satisfy language-internal order constraints. Our study adds to the small, but growing, literature testing the relevance of lab-based studies to natural language birth, and provides convincing evidence that the biases seen in the lab play a role in shaping a brand new language.

5.
Sensors (Basel) ; 24(16)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39205045

ABSTRACT

Sign language is undoubtedly a common way of communication among deaf and non-verbal people, but it is not common for hearing people to use sign language to express feelings or share information in everyday life. Therefore, a significant communication gap exists between deaf and hearing individuals, despite both groups experiencing similar emotions and sentiments. In this paper, we developed a convolutional neural network with a squeeze-and-excitation (SE) block to predict sign language signs, and a smartphone application that makes the ML model accessible so that everyone can benefit from it. The SE block applies attention to the channels of the image features, improving the performance of the model. In addition, we used Shapley additive explanations (SHAP) to interpret the black-box nature of the ML model and understand its workings from within. Using our ML model, we achieved an accuracy of 99.86% on the KU-BdSL dataset. The SHAP analysis shows that the model primarily relies on hand-related visual cues to predict sign language signs, aligning with human communication patterns.
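For readers unfamiliar with SE blocks, a minimal sketch of the channel-attention idea the abstract relies on; the channel size and reduction ratio are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: recalibrate CNN feature maps per channel."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (N, C, H, W)
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c))         # excitation: channel weights
        return x * w.view(n, c, 1, 1)                # reweight the feature maps
```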


Subject(s)
Deafness; Machine Learning; Sign Language; Humans; Deafness/physiopathology; Neural Networks, Computer; Smartphone; Persons With Hearing Impairments/psychology
6.
Neuropsychologia ; 204: 108973, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39151687

ABSTRACT

The goal of this study was to investigate the impact of the age of acquisition (AoA) on functional brain representations of sign language in two exceptional groups of hearing bimodal bilinguals: native signers (simultaneous bilinguals since early childhood) and late signers (proficient sequential bilinguals, who learnt a sign language after puberty). We asked whether effects of AoA would be present across languages - signed and audiovisual spoken - and thus observed only in late signers, as they acquired each language at different life stages, and whether effects of AoA would be present during sign language processing across groups. Moreover, we aimed to carefully control participants' level of sign language proficiency by implementing a battery of language tests developed for the purpose of the project, which confirmed that participants had high competence in sign language. Between-group analyses revealed the hypothesized modulatory effect of AoA in the right inferior parietal lobule (IPL) in native signers compared to late signers. With respect to within-group differences across languages, we observed greater involvement of the left IPL in response to sign language in comparison to spoken language in both native and late signers, indicating language modality effects. Overall, our results suggest that the neural underpinnings of language are molded by the linguistic characteristics of the language as well as by when in life the language is learnt.

7.
Nutrients ; 16(16)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39203801

ABSTRACT

Profoundly hearing-impaired individuals lack health-promotion education on healthy lifestyles, and this may be due to communication barriers and limited awareness of available resources. Therefore, providing understandable healthy eating knowledge and a proper education evaluation via a questionnaire is vital. The present study aimed to translate, culturally adapt, and validate the content of a Saudi sign language version of the General Nutrition Knowledge Questionnaire (GNKQ). The study followed the World Health Organization guidelines for the translation and cultural adaptation of the GNKQ, using two-phase translation (from English into Arabic and then from Arabic into Saudi sign language), including forward-translation, back-translation, and pilot testing among profoundly hearing-impaired individuals. A total of 48 videos were recorded to present the GNKQ in Saudi sign language. The scale-level content validity index (S-CVI) value was equal to 0.96, and the item-level content validity index (I-CVI) value for all questions was between 1 and 0.9, except for question 6 in section 1, which was 0.6; this discrepancy was due to religious, social, and cultural traditions. The translation, cultural adaptation, and content validity of the Saudi sign language version of the GNKQ were satisfactory. Further studies are needed to validate other measurement properties of the present translated version of this questionnaire.
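The validity indices reported here follow standard content-validity arithmetic; a small sketch with made-up ratings shows how an I-CVI of 0.6 (like the flagged question 6) and the S-CVI arise.

```python
# Hypothetical panel of five experts; 1 = item rated relevant, 0 = not relevant.
ratings = {
    "q1": [1, 1, 1, 1, 1],
    "q2": [1, 1, 1, 0, 1],
    "q6_section1": [1, 0, 1, 0, 1],   # 3/5 relevant -> I-CVI = 0.6, as flagged
}
# I-CVI: share of experts rating the item relevant; S-CVI: average of the I-CVIs.
i_cvi = {item: sum(votes) / len(votes) for item, votes in ratings.items()}
s_cvi = sum(i_cvi.values()) / len(i_cvi)
print(i_cvi, round(s_cvi, 2))
```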


Subject(s)
Health Knowledge, Attitudes, Practice; Sign Language; Translations; Humans; Surveys and Questionnaires/standards; Saudi Arabia; Female; Male; Reproducibility of Results; Adult; Middle Aged; Translating; Diet, Healthy
8.
Neural Netw ; 179: 106587, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39111160

ABSTRACT

Continuous Sign Language Recognition (CSLR) is the task of converting a sign language video into a gloss sequence. Existing deep learning-based sign language recognition methods usually rely on large-scale training data and rich supervised information. However, current sign language datasets are limited, and they are annotated only at the sentence level rather than the frame level. Inadequate supervision of sign language data poses a serious challenge for sign language recognition and may result in insufficient training of sign language recognition models. To address the above problems, we propose a cross-modal knowledge distillation method for continuous sign language recognition, which contains two teacher models and one student model. One teacher is the Sign2Text dialogue teacher model, which takes a sign language video and a dialogue sentence as input and outputs the sign language recognition result. The other is the Text2Gloss translation teacher model, which translates a text sentence into a gloss sequence. Both teacher models provide information-rich soft labels to assist the training of the student model, a general sign language recognition model. We conduct extensive experiments on multiple commonly used sign language datasets (PHOENIX 2014T, CSL-Daily, and QSL); the results show that the proposed cross-modal knowledge distillation method can effectively improve sign language recognition accuracy by transferring multi-modal information from the teacher models to the student model. Code is available at https://github.com/glq-1992/cross-modal-knowledge-distillation_new.
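A hedged sketch of what a two-teacher distillation objective of this kind could look like. The loss weights, the temperature, and the use of cross-entropy in place of the CTC loss typical for CSLR are all assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, sign2text_logits, text2gloss_logits,
                      gloss_targets, T=2.0, alpha=0.5, beta=0.25):
    # Hard-label loss against the ground-truth glosses.
    hard = F.cross_entropy(student_logits, gloss_targets)

    def soft(teacher_logits):  # KL between temperature-softened distributions
        return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                        F.softmax(teacher_logits / T, dim=-1),
                        reduction="batchmean") * T * T

    # Soft labels from the Sign2Text dialogue teacher and the Text2Gloss
    # translation teacher, mirroring the paper's two-teacher setup.
    return alpha * hard + beta * soft(sign2text_logits) + beta * soft(text2gloss_logits)
```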


Subject(s)
Deep Learning; Sign Language; Humans; Neural Networks, Computer; Distillation/methods
9.
Cognition ; 251: 105878, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39024841

ABSTRACT

This study investigated Cantonese and Hong Kong Sign Language (HKSL) phonological activation patterns in Hong Kong deaf readers using the ERP technique. Two experiments employing the error disruption paradigm were conducted while recording participants' EEGs. Experiment 1 focused on orthographic and speech-based phonological processing, while Experiment 2 examined sign-phonological processing. ERP analyses focused on the P200 (180-220 ms) and N400 (300-500 ms) components. The results of Experiment 1 showed that hearing readers exhibited both orthographic and phonological effects in the P200 and N400 windows, consistent with previous studies on Chinese reading. In deaf readers, significant speech-based phonological effects were observed in the P200 window, and orthographic effects spanned both the P200 and N400 windows. Comparative analysis between the two groups revealed distinct spatial distributions for orthographic and speech-based phonological ERP effects, which may indicate the engagement of different neural networks during early processing stages. Experiment 2 found evidence of sign-phonological activation in both the P200 and N400 windows among deaf readers, which may reflect the involvement of sign-phonological representations in early lexical access and later semantic integration. Furthermore, exploratory analysis revealed that higher reading fluency in deaf readers correlated with stronger orthographic effects in the P200 window and diminished effects in the N400 window, indicating that efficient orthographic processing during early lexical access is a distinguishing feature of proficient deaf readers.
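A rough sketch of the window-averaging step behind such P200/N400 measures, using MNE-Python; the file name and preprocessing are assumptions, not the authors' pipeline.

```python
import mne

epochs = mne.read_epochs("deaf_readers-epo.fif")  # hypothetical preprocessed epochs

# Mean amplitude in the P200 (180-220 ms) and N400 (300-500 ms) windows,
# one value per epoch and channel, ready for the error-disruption contrasts.
p200 = epochs.copy().crop(tmin=0.180, tmax=0.220).get_data().mean(axis=-1)
n400 = epochs.copy().crop(tmin=0.300, tmax=0.500).get_data().mean(axis=-1)
```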


Subject(s)
Deafness; Electroencephalography; Evoked Potentials; Multilingualism; Reading; Sign Language; Humans; Male; Evoked Potentials/physiology; Female; Hong Kong; Adult; Young Adult; Deafness/physiopathology; Phonetics; East Asian People
10.
Hear Res ; 451: 109074, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39018768

ABSTRACT

Many children with profound hearing loss have received cochlear implants (CI) to help restore some sense of hearing. There is, however, limited research on long-term neurocognitive outcomes in young adults who have grown up hearing through a CI. This study compared the cognitive outcomes of early-implanted (n = 20) and late-implanted (n = 21) young adult CI users, and typically hearing (TH) controls (n = 56), all of whom were enrolled in college. Cognitive fluidity, nonverbal intelligence, and American Sign Language (ASL) comprehension were assessed, revealing no significant differences in cognition and nonverbal intelligence between the early and late-implanted groups. However, there was a difference in ASL comprehension, with the late-implanted group having significantly higher ASL comprehension. Although young adult CI users showed significantly lower scores in a working memory and processing speed task than TH age-matched controls, there were no significant differences in tasks involving executive function shifting, inhibitory control, and episodic memory between young adult CI and young adult TH participants. In an exploratory analysis of a subset of CI participants (n = 17) in whom we were able to examine crossmodal plasticity, we saw greater evidence of crossmodal recruitment from the visual system in late-implanted compared with early-implanted CI young adults. However, cortical visual evoked potential latency biomarkers of crossmodal plasticity were not correlated with cognitive measures or ASL comprehension. The results suggest that in the late-implanted CI users, early access to sign language may have served as a scaffold for appropriate cognitive development, while in the early-implanted group early access to oral language benefited cognitive development. Furthermore, our results suggest that the persistence of crossmodal neuroplasticity into adulthood does not necessarily impact cognitive development. In conclusion, early access to language - spoken or signed - may be important for cognitive development, with no observable effect of crossmodal plasticity on cognitive outcomes.


Subject(s)
Cochlear Implantation; Cochlear Implants; Cognition; Comprehension; Neuronal Plasticity; Persons With Hearing Impairments; Humans; Male; Young Adult; Female; Cochlear Implantation/instrumentation; Persons With Hearing Impairments/psychology; Persons With Hearing Impairments/rehabilitation; Adult; Case-Control Studies; Adolescent; Time Factors; Age Factors; Neuropsychological Tests; Memory, Short-Term; Executive Function; Treatment Outcome; Hearing; Correction of Hearing Impairment/instrumentation
11.
Neuroimage ; 299: 120720, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38971484

ABSTRACT

This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. In addition to previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. In addition, this study showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.


Subject(s)
Deafness; Humans; Deafness/diagnostic imaging; Deafness/physiopathology; Neuroimaging/methods; Nerve Net/diagnostic imaging; Brain Mapping/methods; Brain/diagnostic imaging; Language; Linguistics
12.
Int Arch Otorhinolaryngol ; 28(3): e517-e522, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974642

ABSTRACT

Introduction: The World Health Organization (WHO) estimates that approximately 32 million children worldwide are affected by hearing loss (HL). Cochlear implantation is the first-line treatment for severe to profound sensorineural HL and is considered one of the most successful prostheses developed to date. Objective: To evaluate the oral language development of pediatric patients with prelingual deafness implanted in a reference hospital for the treatment of HL in southern Brazil. Methods: We conducted a retrospective cohort study with a review of medical records of patients undergoing cochlear implant surgery between January 2009 and December 2018. Language development was assessed by reviewing consultations with speech therapy professionals from the cochlear implant group. Results: A total of 152 children were included in the study. The mean age at cochlear implant surgery was 41 months (standard deviation [SD]: ±15). The patients were divided into six groups considering the type of language most used in their daily lives. We found that 36% of children use oral language as their primary form of communication. In a subanalysis, we observed that patients with developed or developing oral language had undergone cochlear implant surgery earlier than patients using Brazilian Sign Language (Língua Brasileira de Sinais, LIBRAS, in Portuguese) or those without developed language. Conclusion: The cochlear implant is a state-of-the-art technology that enables the re-establishment of the sense of hearing and the development of oral language. However, language development is a complex process known to require a critical period in order to occur properly. We still see many patients receiving late diagnosis and treatment, which implies a delay and, often, the impossibility of developing oral communication. Level of Evidence: Level 3 (cohort study).

13.
PeerJ Comput Sci ; 10: e2063, 2024.
Article in English | MEDLINE | ID: mdl-38983191

ABSTRACT

Lack of an effective early sign language learning framework for the hard-of-hearing population can have traumatic consequences, causing social isolation and unfair treatment in workplaces. Alphabet and digit detection methods have been the basic framework for early sign language learning but are restricted by performance and accuracy, making it difficult to detect signs in real life. This article proposes an improved sign language detection method for early sign language learners based on the You Only Look Once version 8.0 (YOLOv8) algorithm, referred to as the intelligent sign language detection system (iSDS), which exploits the power of deep learning to detect sign language-distinct features. The iSDS method reduces false-positive rates and improves both the accuracy and the speed of sign language detection. The proposed iSDS framework for early sign language learners consists of three basic steps: (i) image pixel processing to extract features that are underrepresented in the frame, (ii) inter-dependence pixel-based feature extraction using YOLOv8, and (iii) web-based signer independence validation. The proposed iSDS enables faster response times and reduces misinterpretation and inference delay time. The iSDS achieved state-of-the-art performance of over 97% for precision, recall, and F1-score, with a best mAP of 87%. The proposed iSDS method has several potential applications, including continuous sign language detection systems and intelligent web-based sign recognition systems.
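A short sketch of how a YOLOv8-based detector like the iSDS might be fine-tuned and run, using the ultralytics API; the dataset config, epoch count, and confidence threshold are assumptions.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                     # pre-trained checkpoint
model.train(data="sign_alphabet.yaml", epochs=100, imgsz=640)  # fine-tune on sign data

# Webcam inference: each detected sign comes back as a labeled bounding box.
for result in model.predict(source=0, stream=True, conf=0.5):
    for box in result.boxes:
        print(result.names[int(box.cls)], float(box.conf))
```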

14.
Disasters ; : e12653, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041381

ABSTRACT

This study explores the South Korean Deaf community's response to sign language interpreting during the COVID-19 (coronavirus disease 2019) health crisis, focusing on individual factors affecting the signers' comprehension. The data were collected from a mobile-based questionnaire survey conducted among 401 Deaf adults; binary probit modelling was adopted to analyse the data. The major findings are: (i) 59.9 per cent of the respondents understood less than 70 per cent of the interpreting; (ii) males and urban residents tend to understand better; (iii) younger people (less than 50 years) and signers with a Bachelor's degree or higher are likely to have lower comprehension; and (iv) Deaf adults who visited a doctor after the COVID-19 outbreak tended to have lower comprehension. The findings demonstrate that individual characteristics, including age, impact significantly on the extent to which Deaf individuals understand the sign language interpreting of COVID-19 information, indicating that steps are needed to achieve a Deaf-inclusive society during a health disaster.
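A hedged sketch of the binary probit model the study reports, using statsmodels; the variable names, coding, and data file are illustrative, not the survey's actual instrument.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("deaf_survey.csv")  # hypothetical: 401 Deaf adult respondents

# Outcome: 1 if the respondent understood at least 70% of the interpreting.
X = sm.add_constant(df[["male", "urban", "age_under_50",
                        "bachelor_or_higher", "visited_doctor"]])
probit = sm.Probit(df["understood_70pct"], X).fit()
print(probit.summary())  # signs of the coefficients mirror findings (ii)-(iv)
```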

15.
Sensors (Basel) ; 24(14)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39066011

ABSTRACT

The aim of this study is to develop a practical software solution for real-time recognition of sign language words using two arms. This will facilitate communication between hearing-impaired individuals and those who can hear. We are aware of several sign language recognition systems developed using different technologies, including cameras, armbands, and gloves. However, the system we propose in this study stands out for its practicality, utilizing surface electromyography (muscle activity) and inertial measurement unit (motion dynamics) data from both arms. We address the drawbacks of other methods, such as high costs, low accuracy due to ambient light and obstacles, and complex hardware requirements, which have limited their practical application. Our software can run on different operating systems using digital signal processing and machine learning methods specific to this study. For the test, we created a dataset of 80 words based on their frequency of use in daily life and performed a thorough feature extraction process. We tested the recognition performance using various classifiers and parameters and compared the results. The random forest algorithm emerged as the most successful, achieving a remarkable 99.875% accuracy, while the naïve Bayes algorithm had the lowest success rate with 87.625% accuracy. The new system promises to significantly improve communication for people with hearing disabilities and ensures seamless integration into daily life without compromising user comfort or lifestyle quality.
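A minimal sketch of the classification stage described above, with windowed sEMG/IMU features from both arms fed to a random forest (the study's best classifier); the feature files and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.load("semg_imu_features.npy")  # (n_windows, n_features), both arms
y = np.load("word_labels.npy")        # one of 80 daily-life sign words

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3%}")  # the paper reports 99.875%
```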


Subject(s)
Algorithms; Electromyography; Sign Language; Wearable Electronic Devices; Humans; Electromyography/methods; Electromyography/instrumentation; Machine Learning; Signal Processing, Computer-Assisted; Adult; Male; Female; Bayes Theorem
16.
ACS Appl Mater Interfaces ; 16(29): 38780-38791, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39010653

ABSTRACT

Flexible strain sensors have been widely researched in fields such as smart wearables, human health monitoring, and biomedical applications. However, simultaneously achieving a wide sensing range and high sensitivity in flexible strain sensors remains a challenge, limiting their further application. To address these issues, a cross-scale combinatorial bionic hierarchical design featuring microscale morphology combined with a macroscale base is presented to balance sensing range and sensitivity. Inspired by the combination of serpentine and butterfly wing structures, this study employs three-dimensional printing, prestretching, and mold transfer processes to construct a combinatorial bionic hierarchical flexible strain sensor (CBH-sensor) with serpentine-shaped inverted-V-groove/wrinkling-cracking structures. The CBH-sensor has a wide sensing range of up to 150% strain and high sensitivity, with a gauge factor of up to 2416.67. In addition, it demonstrates the application of the CBH-sensor array in sign language gesture recognition, successfully identifying nine different sign language gestures with an impressive accuracy of 100% with the assistance of machine learning. The CBH-sensor exhibits considerable promise for use in enabling unobstructed communication between individuals who use sign language and those who do not. Furthermore, it has wide-ranging possibilities for use in the field of gesture-driven interactions in human-computer interfaces.
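For reference, the standard gauge-factor definition behind the reported sensitivity figure (a general formula, not taken from the paper):

```latex
% GF relates the relative resistance change to the applied strain;
% R_0 and L_0 are the unstrained resistance and length.
\[
  \mathrm{GF} = \frac{\Delta R / R_0}{\varepsilon},
  \qquad
  \varepsilon = \frac{\Delta L}{L_0}
\]
```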


Subject(s)
Machine Learning; Sign Language; Humans; Bionics; Wearable Electronic Devices; Gestures; Printing, Three-Dimensional
17.
Int J Womens Health ; 16: 1235-1248, 2024.
Article in English | MEDLINE | ID: mdl-39045213

ABSTRACT

Purpose: Some deaf and hard-of-hearing (DHH) individuals face health information barriers, increasing their risk of diabetes mellitus (DM) and subsequent cancer development. This study examines if health-related quality of life (HRQoL) and deaf patient-reported outcomes (DHH-QoL) mediate the relationship between DM diagnosis and cancer screening adherence among DHH individuals. Patients and Methods: In a cross-sectional study, US DHH adults assigned female at birth answered questions on cervical and breast cancer screenings from the ASL-English bilingual Health Information National Trends Survey (HINTS-ASL) and the PROMIS (Patient Reported Outcome Measurement Information System) Deaf Profile measure's Communication Health and Global Health domains. Odds ratios (OR) and 95% confidence intervals (CI) were obtained from multivariable logistic and linear regression models, examining the association between DM, DHH-QoL, and cancer screening adherence, adjusting for other covariates and HRQoL. A Baron and Kenny causal mediation analysis was used. A two-sided p < 0.05 indicated significance. Results: Most respondents were White (66.4%), heterosexual (66.2%), did not have DM (83.9%), had health insurance (95.5%), and adhered to pap smears (75.7%) and mammograms (76.9%). The average (standard deviation) DHH-QoL score was 50.9 (8.6). Those with DM had lower HRQoL scores (46.2 (9.5) vs 50.2 (8.8); p < 0.0001) than those without. Non-significant multivariable models indicate that those with DM were more adherent to pap testing (OR: 1.48; 95% CI: 0.72, 3.03; p = 0.285) and mammograms (2.18; 95% CI: 0.81, 5.88; p = 0.122), with DHH-QoL scores slightly increasing them to 1.53 (0.74, 3.16; p = 0.250) for pap testing and 2.55 (0.91, 7.13; p = 0.076) for mammograms. DHH-QoL was significantly associated with mammograms (p = 0.027), with 6% increased adherence per unit increase in the score. Overall, HRQoL and DHH-QoL were not significant mediators. Conclusion: While HRQoL/DHH-QoL in DHH individuals with DM does not mediate cancer screening adherence, higher DHH-QoL scores are associated with it. DHH-focused health literacy and communication training can improve cancer-related outcomes.
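A hedged sketch of the Baron and Kenny mediation logic the study applies, in statsmodels; variable and file names are assumptions, not the HINTS-ASL codebook.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hints_asl.csv")  # hypothetical analysis file

# Baron & Kenny steps: (1) exposure -> outcome, (2) exposure -> mediator,
# (3) exposure + mediator -> outcome, each adjusted for covariates.
step1 = smf.logit("pap_adherent ~ diabetes + C(race) + insured", data=df).fit()
step2 = smf.ols("dhh_qol ~ diabetes + C(race) + insured", data=df).fit()
step3 = smf.logit("pap_adherent ~ diabetes + dhh_qol + C(race) + insured",
                  data=df).fit()
# Mediation requires significant paths in steps 1-2 and an attenuated diabetes
# coefficient in step 3; as in the paper, QoL need not turn out to mediate.
```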

18.
Front Artif Intell ; 7: 1297347, 2024.
Article in English | MEDLINE | ID: mdl-38957453

ABSTRACT

Addressing the increasing demand for accessible sign language learning tools, this paper introduces an innovative Machine Learning-Driven Web Application dedicated to Sign Language Learning. This web application represents a significant advancement in sign language education. Unlike traditional approaches, the application's unique methodology involves assigning users different words to spell. Users are tasked with signing each letter of the word, earning a point upon correctly signing the entire word. The paper delves into the development, features, and the machine learning framework underlying the application. Developed using HTML, CSS, JavaScript, and Flask, the web application seamlessly accesses the user's webcam for a live video feed, displaying the model's predictions on-screen to facilitate interactive practice sessions. The primary aim is to provide a learning platform for those who are not familiar with sign language, offering them the opportunity to acquire this essential skill and fostering inclusivity in the digital age.
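A minimal sketch of the Flask side of such an application: the page's JavaScript posts webcam frames and receives a predicted letter to display. The endpoint, payload format, and placeholder model are assumptions, not the paper's implementation.

```python
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def predict_letter(frame: np.ndarray) -> str:
    """Placeholder for the trained sign-classification model."""
    return LETTERS[0]

@app.route("/predict", methods=["POST"])
def predict():
    frame = np.frombuffer(request.data, dtype=np.uint8)  # raw frame bytes from JS
    return jsonify({"letter": predict_letter(frame)})    # shown on-screen client-side

if __name__ == "__main__":
    app.run(debug=True)
```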

19.
Acta Neurochir (Wien) ; 166(1): 260, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858238

ABSTRACT

The aim of this case study was to describe differences in English and British Sign Language (BSL) communication caused by a left temporal tumour, as reflected in the discordant presentation of symptoms, in intraoperative stimulation mapping during awake craniotomy, and in post-operative language abilities. We report the first case of a hearing child of deaf adults, who acquired BSL with English as a second language. The patient presented with English word finding difficulty, phonemic paraphasias, and reading and writing challenges, with BSL preserved. Intraoperatively, object naming and semantic fluency tasks were performed in English and BSL, revealing differential language maps for each modality. Post-operative assessment confirmed mild dysphasia for English with BSL preserved. These findings suggest that in hearing people who acquire a signed language as a first language, topographical organisation may differ from that of a second, spoken, language.


Subject(s)
Brain Neoplasms; Craniotomy; Glioblastoma; Sign Language; Temporal Lobe; Humans; Glioblastoma/surgery; Craniotomy/methods; Brain Neoplasms/surgery; Brain Neoplasms/complications; Brain Neoplasms/diagnostic imaging; Temporal Lobe/surgery; Temporal Lobe/diagnostic imaging; Brain Mapping/methods; Male; Wakefulness/physiology; Speech/physiology; Multilingualism; Language; Adult
20.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life for the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying and converting right-hand gestures into audible sounds or displayed text. METHODS: To achieve these objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain alphabets by successfully distinguishing between their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.
