Results 1 - 20 of 3,661
1.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). The paper describes the design of the inductive sensor array integrated into a flexible, wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit along with the externally connected inductors and capacitors. Changes in elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing and the random forest MLA used to recognize 10 different elbow gestures are described. Rigorous evaluation on eight subjects, together with data augmentation that expanded the dataset to 1,270 trials per gesture, enabled the system to achieve accuracies of 98.3% and 98.5% under 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. Test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical, effective approach to intuitive human-machine interaction.
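For readers prototyping the classification stage described above, the minimal sketch below (Python/scikit-learn) trains a random forest and scores it with 5-fold cross-validation. The synthetic feature matrix, coil count, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of the classification stage: a random forest evaluated with
# 5-fold cross-validation, as in the paper. Feature extraction from the LC
# tank readings is reduced to a random placeholder; shapes and hyperparameters
# are assumptions, not the authors' settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_coils, n_features = 1270, 4, 8   # assumed array size
X = rng.normal(size=(n_trials, n_coils * n_features))  # stand-in inductance features
y = rng.integers(0, 10, size=n_trials)       # 10 elbow gestures

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```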


Subjects
Algorithms, Elbow, Gestures, Machine Learning, Humans, Elbow/physiology, Wearable Electronic Devices, Pattern Recognition, Automated/methods, Signal Processing, Computer-Assisted, Male, Adult, Female
2.
Cogn Sci ; 48(7): e13479, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38980965

ABSTRACT

Gestures-hand movements that accompany speech and express ideas-can help children learn how to solve problems, flexibly generalize learning to novel problem-solving contexts, and retain what they have learned. But does it matter who is doing the gesturing? We know that producing gesture leads to better comprehension of a message than watching someone else produce gesture. But we do not know how producing versus observing gesture impacts deeper learning outcomes such as generalization and retention across time. Moreover, not all children benefit equally from gesture instruction, suggesting that there are individual differences that may play a role in who learns from gesture. Here, we consider two factors that might impact whether gesture leads to learning, generalization, and retention after mathematical instruction: (1) whether children see gesture or do gesture and (2) whether a child spontaneously gestures before instruction when explaining their problem-solving reasoning. For children who spontaneously gestured before instruction, both doing and seeing gesture led to better generalization and retention of the knowledge gained than a comparison manipulative action. For children who did not spontaneously gesture before instruction, doing gesture was less effective than the comparison action for learning, generalization, and retention. Importantly, this learning deficit was specific to gesture, as these children did benefit from doing the comparison manipulative action. Our findings are the first evidence that a child's use of a particular representational format for communication (gesture) directly predicts that child's propensity to learn from using the same representational format.


Subjects
Gestures, Learning, Problem Solving, Humans, Female, Male, Mathematics, Child, Child, Preschool, Generalization, Psychological/physiology
3.
Hum Brain Mapp ; 45(11): e26762, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39037079

ABSTRACT

Hierarchical models have been proposed to explain how the brain encodes actions, whereby different areas represent different features, such as gesture kinematics, target object, action goal, and meaning. The visual processing of action-related information is distributed over a well-known network of brain regions spanning separate anatomical areas, attuned to specific stimulus properties, and referred to as action observation network (AON). To determine the brain organization of these features, we measured representational geometries during the observation of a large set of transitive and intransitive gestures in two independent functional magnetic resonance imaging experiments. We provided evidence for a partial dissociation between kinematics, object characteristics, and action meaning in the occipito-parietal, ventro-temporal, and lateral occipito-temporal cortex, respectively. Importantly, most of the AON showed low specificity to all the explored features, and representational spaces sharing similar information content were spread across the cortex without being anatomically adjacent. Overall, our results support the notion that the AON relies on overlapping and distributed coding and may act as a unique representational space instead of mapping features in a modular and segregated manner.
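As a hedged illustration of how representational geometries are typically compared, the sketch below builds a representational dissimilarity matrix (RDM) per region and correlates RDMs across regions; random patterns stand in for fMRI data, and the region names and sizes are assumptions, not the authors' analysis.

```python
# Sketch of representational-geometry comparison (representational similarity
# analysis): build an RDM per region from condition-wise activity patterns,
# then correlate RDMs across regions. Random data stands in for fMRI patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
conditions, voxels = 24, 200                     # assumed sizes
patterns = {r: rng.normal(size=(conditions, voxels))
            for r in ("occipito-parietal", "ventro-temporal")}

# RDM = pairwise correlation distance between condition patterns
rdms = {r: pdist(p, metric="correlation") for r, p in patterns.items()}

rho, _ = spearmanr(rdms["occipito-parietal"], rdms["ventro-temporal"])
print(f"Second-order similarity between regions: rho = {rho:.2f}")
```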


Subjects
Brain Mapping, Gestures, Magnetic Resonance Imaging, Humans, Male, Female, Biomechanical Phenomena/physiology, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Photic Stimulation/methods, Sensitivity and Specificity
4.
J Psycholinguist Res ; 53(4): 56, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926243

ABSTRACT

The present paper examines how native English speakers produce scopally ambiguous sentences and how they use gestures and prosody for disambiguation. As a case in point, the participants in the present study produced English negative quantifiers, which appear in two different positions, as in (1) 'The election of no candidate was a surprise' (a: 'of those elected, none was a surprise'; b: 'no candidate was elected, and that was a surprise') and (2) 'No candidate's election was a surprise' (a: 'of those elected, none was a surprise'; b: # 'no candidate was elected, and that was a surprise'). This allowed us to investigate the gesture production and prosodic patterns of the positional effects (i.e., the a-interpretation is available in the two different positions in 1 and 2) and the interpretation effects (i.e., two different interpretations are available in the same position in 1). We found that participants tended to produce more head shakes for the (a) interpretation regardless of position, but more head nods/beats for the (b) interpretation. While there is no difference in the prosody of 'no' between the (a) and (b) interpretations in (1), there are pitch and durational differences between the (a) interpretations in (1) and (2). This study points out abstract similarities in gestural movements across languages such as Catalan and Spanish (Prieto et al. in Lingua 131:136-150, 2013. 10.1016/j.lingua.2013.02.008; Tubau et al. in Linguist Rev 32(1):115-142, 2015. 10.1515/tlr-2014-0016), showing that meaning is crucial for gesture patterns. We emphasize that gesture patterns can disambiguate ambiguous interpretations when prosody cannot.


Subjects
Gestures, Psycholinguistics, Humans, Adult, Male, Female, Speech/physiology, Language, Young Adult
5.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has recently prevailed in the field of human-computer interaction for controlling intelligent prosthetics. Currently, machine learning and deep learning are the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains laborious. Existing deep learning methods utilize complex neural network architectures to achieve higher accuracy but can suffer from overfitting, insufficient adaptability, and low recognition accuracy. To address these issues, a novel lightweight model named the dual-stream LSTM feature fusion classifier is proposed, based on the concatenation of five time-domain features of the EMG signals and the raw data; both streams are processed with one-dimensional convolutional neural networks and LSTM layers to carry out the classification. The proposed method effectively captures the global features of EMG signals using a simple architecture, which means a lower computational cost. An experiment was conducted on the public DB1 dataset, which contains 52 gestures, each repeated 10 times by each of its 27 subjects. The model achieves an accuracy of 89.66%, comparable to that of more complex deep learning networks, with an inference time of 87.6 ms per gesture, making it suitable for real-time control systems. The proposed model was further validated in a subject-wise experiment on 10 of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. These results reflect its ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal while using an efficient, lightweight network.
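The sketch below illustrates the dual-stream idea in the spirit described above: one 1D-CNN + LSTM stream for the raw sEMG window, one for stacked time-domain features, fused by concatenation before the classifier. Layer sizes, channel counts, and the feature set are assumptions, not the paper's exact model.

```python
# Hedged sketch of a dual-stream CNN+LSTM feature fusion classifier. The raw
# stream sees the sEMG window directly; the feature stream sees time-domain
# features; the two final hidden states are concatenated for classification.
import torch
import torch.nn as nn

class DualStreamClassifier(nn.Module):
    def __init__(self, n_channels=10, n_features=5, n_classes=52, hidden=64):
        super().__init__()
        self.raw_cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU())
        self.raw_lstm = nn.LSTM(32, hidden, batch_first=True)
        self.feat_cnn = nn.Sequential(
            nn.Conv1d(n_channels * n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.feat_lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, raw, feats):                   # raw: (B, C, T); feats: (B, C*F, T')
        r = self.raw_cnn(raw).transpose(1, 2)        # -> (B, T, 32)
        r = self.raw_lstm(r)[0][:, -1]               # last hidden state
        f = self.feat_cnn(feats).transpose(1, 2)
        f = self.feat_lstm(f)[0][:, -1]
        return self.head(torch.cat([r, f], dim=1))   # fuse the two streams

logits = DualStreamClassifier()(torch.randn(8, 10, 200), torch.randn(8, 50, 20))
print(logits.shape)  # torch.Size([8, 52])
```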


Subjects
Deep Learning, Electromyography, Gestures, Neural Networks, Computer, Electromyography/methods, Humans, Signal Processing, Computer-Assisted, Pattern Recognition, Automated/methods, Algorithms, Machine Learning, Hand/physiology, Memory, Short-Term/physiology
6.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on gesture recognition accuracy. The investigation is based on several benchmark datasets and one real hand gesture dataset comprising 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. On the benchmark data, the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. The Recursive Feature Elimination (RFE) method, however, demonstrated the potential to enhance classification accuracy across most of the datasets; it selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigation showed that selecting either 65 or 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed that three additional features from three specific sensors could enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to substantially reduce the number of necessary features while maintaining classification accuracy, and they underscore the need for further analysis and refinement to achieve optimal solutions.
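The following sketch shows the kind of filter-versus-wrapper comparison described above, using scikit-learn's mutual information filter and RFE wrapper on stand-in data; dataset shapes, the base estimator, and the subset size are assumptions for illustration only.

```python
# Sketch comparing a filter (mutual information) and a wrapper (RFE) feature
# evaluation method on synthetic stand-in data, scored by cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=100,
                           n_informative=20, random_state=0)  # stand-in sEMG features
base = RandomForestClassifier(n_estimators=50, random_state=0)

for name, selector in [("MI filter", SelectKBest(mutual_info_classif, k=30)),
                       ("RFE wrapper", RFE(base, n_features_to_select=30, step=5))]:
    Xs = selector.fit_transform(X, y)
    acc = cross_val_score(base, Xs, y, cv=5).mean()
    print(f"{name}: {Xs.shape[1]} features, CV accuracy {acc:.3f}")
```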


Subjects
Electromyography, Gestures, Hand, Humans, Electromyography/methods, Hand/physiology, Algorithms, Male, Adult, Female, Signal Processing, Computer-Assisted
7.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894473

ABSTRACT

Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for some languages, especially in Saudi Arabia. This shortage leaves a large proportion of the hearing-impaired population deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. We propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to capture temporal characteristics and handle sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created an ArSL dataset of 20 different words: 4,000 images for 10 static gesture words and 500 videos for 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. This paper thus represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
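For the dynamic-word branch, a common realization of the hybrid idea above is a small CNN that encodes each video frame followed by an LSTM over the frame sequence. The sketch below shows that pattern; input sizes, layers, and the 10-word vocabulary are assumptions, not the paper's architecture.

```python
# Hedged sketch of a CNN+LSTM hybrid for dynamic sign words: a per-frame CNN
# extracts spatial features, an LSTM models the temporal sequence.
import torch
import torch.nn as nn

class CNNLSTMSign(nn.Module):
    def __init__(self, n_words=10):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial features per frame
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())     # -> (B*T, 32)
        self.lstm = nn.LSTM(32, 64, batch_first=True)  # temporal dynamics
        self.head = nn.Linear(64, n_words)

    def forward(self, video):                          # (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        return self.head(self.lstm(feats)[0][:, -1])

print(CNNLSTMSign()(torch.randn(2, 16, 3, 64, 64)).shape)  # torch.Size([2, 10])
```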


Subjects
Deep Learning, Neural Networks, Computer, Sign Language, Humans, Saudi Arabia, Language, Gestures
8.
Appetite ; 200: 107552, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38885742

ABSTRACT

Assisted eating is a basic caring practice and the means through which many individuals receive adequate nutrition. Research in this area has noted the challenges of helping others to eat while upholding their independence, but has yet to explicate in detail how this caring practice is achieved across the lifespan. This paper provides an empirical analysis of assisted eating episodes in two different institutions, detailing the processes through which eating is collaboratively achieved between two persons. Data are video-recorded episodes of infants during preschool lunches and of adults with dementia during care home meals, both in Sweden. Using multimodal interaction analysis in the ethnomethodology and conversation analysis (EMCA) tradition, three core stages of assisted eating and their underpinning embodied practices were identified: (1) establishing joint attention, (2) offering the food, and (3) transferring food into the mouth. The first stage is particularly crucial in establishing the activity as a collaborative process. The analysis details the interactional practices through which assisted eating becomes a joint accomplishment using a range of multimodal features such as eye gaze, hand gestures, and vocalisations. The paper thus demonstrates how assisted eating becomes a caring practice through the active participation of both caregiver and cared-for person, according to their needs. The analysis has implications not only for professional caring work in institutional settings but also for the detailed analysis of eating as an embodied activity.


Subjects
Gestures, Humans, Sweden, Female, Male, Infant, Dementia/psychology, Caregivers/psychology, Eating/psychology, Child, Preschool, Feeding Behavior/psychology, Aged, Meals/psychology, Attention
9.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931542

ABSTRACT

This review explores the historical and current significance of gestures as a universal form of communication, with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day, when advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. The review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static or dynamic, and grades their detection difficulty. The paper also reviews the haptic devices used in VR along with their advantages and challenges, and provides an overview of the hand gesture acquisition process, from inputs and pre-processing to pose detection, for both static and dynamic gestures.


Subjects
Gestures, Hand, Virtual Reality, Humans, Hand/physiology, Algorithms, User-Computer Interface, Artificial Intelligence
10.
Sci Rep ; 14(1): 14873, 2024 06 27.
Article in English | MEDLINE | ID: mdl-38937537

ABSTRACT

Smart gloves are in high demand for entertainment, manufacturing, and rehabilitation. However, designing smart gloves has been complex and costly due to trial and error. We propose an open simulation platform for designing smart gloves, including optimal sensor placement and deep learning models for gesture recognition, with reduced costs and manual effort. Our pipeline starts with 3D hand pose extraction from videos and extends to the refinement and conversion of the poses into hand joint angles based on inverse kinematics, the sensor placement optimization based on hand joint analysis, and the training of deep learning models using simulated sensor data. In comparison to the existing platforms that always require precise motion data as input, our platform takes monocular videos, which can be captured with widely available smartphones or web cameras, as input and integrates novel approaches to minimize the impact of the errors induced by imprecise motion extraction from videos. Moreover, our platform enables more efficient sensor placement selection. We demonstrate how the pipeline works and how it delivers a sensible design for smart gloves in a real-life case study. We also evaluate the performance of each building block and its impact on the reliability of the generated design.
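The first stage of such a pipeline, extracting hand poses from monocular video, can be prototyped with an off-the-shelf estimator. The sketch below uses MediaPipe Hands as one possible choice; the input filename is hypothetical, and the paper's platform may rely on a different estimator and refinement steps.

```python
# Sketch of monocular 3D hand-pose extraction, the first pipeline stage
# described above, using MediaPipe Hands as one off-the-shelf option.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture("hand_motion.mp4")   # hypothetical input video

poses = []                                  # per-frame list of 21 (x, y, z) landmarks
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        poses.append([(p.x, p.y, p.z) for p in lm])

cap.release()
print(f"extracted {len(poses)} frames of hand landmarks")
# downstream stages (not shown): inverse kinematics to joint angles, sensor
# placement optimization, and training on simulated sensor data
```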


Subjects
Gestures, Humans, Hand/physiology, Deep Learning, Biomechanical Phenomena, Computer Simulation, Equipment Design
11.
Nat Commun ; 15(1): 4791, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839754

ABSTRACT

The planum temporale (PT), a key language area, is specialized in the left hemisphere in prelinguistic infants and is considered a marker of the pre-wired, language-ready brain. However, studies have reported a similar structural PT left-asymmetry not only in various adult non-human primates but also in newborn baboons. Its shared functional links with language are not fully understood. Here we demonstrate, using previously obtained MRI data, that early detection of PT left-asymmetry among 27 newborn baboons (Papio anubis, age range of 4 days to 2 months) predicts the future development of right-hand preference for communicative gestures but not for non-communicative actions. Specifically, only newborns with a larger left-than-right PT were more likely to develop right-handed communication once juvenile, a contralateral brain-gesture link that is maintained in a group of 70 mature baboons. This finding suggests that early PT asymmetry may be a common inherited prewiring of the primate brain for the ontogeny of ancient lateralised properties shared between monkey gesture and human language.


Subjects
Animals, Newborn, Functional Laterality, Gestures, Magnetic Resonance Imaging, Animals, Functional Laterality/physiology, Female, Male, Papio anubis, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging, Language
12.
Article in English | MEDLINE | ID: mdl-38869995

ABSTRACT

Gesture recognition is crucial for enhancing human-computer interaction and is particularly pivotal in rehabilitation contexts, aiding individuals recovering from physical impairments and significantly improving their mobility and interactive capabilities. However, current wearable hand gesture recognition approaches are often limited in detection performance, wearability, and generalization. We thus introduce EchoGest, a novel hand gesture recognition system based on soft, stretchable, transparent artificial skin with integrated ultrasonic waveguides. Our presented system is the first to use soft ultrasonic waveguides for hand gesture recognition. Ecoflex™ 00-31 and Ecoflex™ 00-45 Near Clear™ silicone elastomers were employed to fabricate the artificial skin and ultrasonic waveguides, while 0.1 mm diameter silver-plated copper wires connected the transducers in the waveguides to the electrical system. The wires are enclosed within an additional elastomer layer, achieving a sensing skin with a total thickness of around 500 µm. Ten participants wore the EchoGest system and performed static hand gestures from two gesture sets: 8 daily life gestures and 10 American Sign Language (ASL) digits 0-9. Leave-One-Subject-Out Cross-Validation analysis demonstrated accuracies of 91.13% for daily life gestures and 88.5% for ASL gestures. The EchoGest system has significant potential in rehabilitation, particularly for tracking and evaluating hand mobility, which could substantially reduce the workload of therapists in both clinical and home-based settings. Integrating this technology could revolutionize hand gesture recognition applications, from real-time sign language translation to innovative rehabilitation techniques.
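The leave-one-subject-out evaluation used above can be reproduced with scikit-learn's LeaveOneGroupOut, treating each subject as a group, as in the sketch below; the data, subject count, and classifier are placeholders, not the EchoGest pipeline.

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation: each fold holds
# out all trials of one subject, testing cross-user generalization.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_feats = 10, 80, 16
X = rng.normal(size=(n_subjects * trials_per_subject, n_feats))
y = rng.integers(0, 8, size=len(X))                    # 8 daily-life gestures
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

scores = cross_val_score(SVC(), X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"LOSO accuracy per held-out subject: {np.round(scores, 2)}")
```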


Subjects
Gestures, Hand, Pattern Recognition, Automated, Wearable Electronic Devices, Humans, Female, Hand/physiology, Adult, Male, Pattern Recognition, Automated/methods, Young Adult, Ultrasonics, Algorithms, Silicone Elastomers, Skin, Reproducibility of Results
13.
Cognition ; 250: 105855, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38865912

ABSTRACT

People are more likely to gesture when their speech is disfluent. Why? According to an influential proposal, speakers gesture when they are disfluent because gesturing helps them to produce speech. Here, we test an alternative proposal: People may gesture when their speech is disfluent because gestures serve as a pragmatic signal, telling the listener that the speaker is having problems with speaking. To distinguish between these proposals, we tested the relationship between gestures and speech disfluencies when listeners could see speakers' gestures and when they were prevented from seeing their gestures. If gesturing helps speakers to produce words, then the relationship between gesture and disfluency should persist regardless of whether gestures can be seen. Alternatively, if gestures during disfluent speech are pragmatically motivated, then the tendency to gesture more when speech is disfluent should disappear when the speaker's gestures are invisible to the listener. Results showed that speakers were more likely to gesture when their speech was disfluent, but only when the listener could see their gestures and not when the listener was prevented from seeing them, supporting a pragmatic account of the relationship between gestures and disfluencies. People tend to gesture more when speaking is difficult, not because gesturing facilitates speech production, but rather because gestures comment on the speaker's difficulty presenting an utterance to the listener.


Subjects
Gestures, Speech, Humans, Speech/physiology, Female, Male, Adult, Young Adult, Speech Perception/physiology
14.
Exp Brain Res ; 242(8): 1831-1840, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38842756

ABSTRACT

Recent studies on the imitation of intransitive gestures suggest that the body part effect relies mainly upon the direct route of the dual-route model, through a visuo-transformation mechanism. Here, we test the visuo-constructive hypothesis, which posits that visual complexity may directly potentiate the body part effect for meaningless gestures. We predicted that the difference between imitation of hand and finger gestures would increase with the visuo-spatial complexity of the gestures. Second, we aimed to identify some of the visuo-spatial predictors of meaningless finger imitation skills. Thirty-eight participants underwent an imitation task containing three distinct sets of gestures: meaningful gestures, meaningless gestures with low visual complexity, and meaningless gestures with higher visual complexity than the first set of meaningless gestures. Our results were in general agreement with the visuo-constructive hypothesis, showing an increase in the difference between hand and finger gestures, but only for meaningless gestures with higher visuo-spatial complexity. Regression analyses confirmed that imitation accuracy decreases with resource-demanding visuo-spatial factors. Taken together, our results suggest that the body part effect is highly dependent on the visuo-spatial characteristics of the gestures.


Subjects
Gestures, Imitative Behavior, Space Perception, Humans, Male, Female, Imitative Behavior/physiology, Young Adult, Adult, Space Perception/physiology, Psychomotor Performance/physiology, Hand/physiology, Visual Perception/physiology
15.
J Exp Psychol Gen ; 153(7): 1904-1919, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38842887

ABSTRACT

The ecology of human communication is face to face. In these contexts, speakers dynamically modify their communication across vocal (e.g., speaking rate) and gestural (e.g., cospeech gestures related in meaning to the content of speech) channels while speaking. What is the function of these adjustments? Here we ask whether speakers dynamically make these adjustments to increase communicative success and decrease cognitive effort while speaking. We assess whether speakers modulate word durations and produce iconic gestures (i.e., gestures imagistically evoking properties of referents) depending on the predictability of each word they utter. Predictability is operationalized as surprisal and computed from computational language models trained on corpora of child-directed or adult-directed language. Using data from a novel corpus (the Ecological Language Corpus) of naturalistic interactions between adults and children (aged 3-4) and between adults, we show that surprisal predicts speakers' multimodal adjustments and that some of these effects are modulated by whether the comprehender is a child or an adult. Thus, communicative efficiency applies generally across vocal and gestural communicative channels and is not limited to structural properties of language or the vocal modality.
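The surprisal measure used above is per-word negative log probability under an autoregressive language model. The sketch below computes it with GPT-2 as a stand-in for the corpus-trained child-/adult-directed models the authors used.

```python
# Sketch of per-token surprisal = -log2 p(token | context) from a causal LM.
# GPT-2 is a stand-in; the paper trains its own models on specific corpora.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("the dog chased the ball", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits               # (1, T, vocab)

logprobs = torch.log_softmax(logits, dim=-1)
for t in range(1, ids.shape[1]):             # first token has no context here
    s = -logprobs[0, t - 1, ids[0, t]] / torch.log(torch.tensor(2.0))
    print(f"{tok.decode([int(ids[0, t])]):>8s}  surprisal = {s:.2f} bits")
```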


Subjects
Gestures, Humans, Adult, Female, Male, Child, Preschool, Speech/physiology, Language, Communication
16.
J Speech Lang Hear Res ; 67(7): 2283-2296, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38861424

ABSTRACT

PURPOSE: The current study examined the predictive role of gestures and gesture-speech combinations on later spoken language outcomes in minimally verbal (MV) autistic children enrolled in a blended naturalistic developmental/behavioral intervention (Joint Attention, Symbolic Play, Engagement, and Regulation [JASPER] + Enhanced Milieu Teaching [EMT]). METHOD: Participants were 50 MV autistic children (40 boys), ages 54-105 months (M = 75.54, SD = 16.45). MV was defined as producing fewer than 20 spontaneous, unique, and socially communicative words. Autism symptom severity (Autism Diagnostic Observation Schedule-Second Edition) and nonverbal cognitive skills (Leiter-R Brief IQ) were assessed at entry. A natural language sample (NLS), a 20-min examiner-child interaction with specified toys, was collected at entry (Week 1) and exit (Week 18) from JASPER + EMT intervention. The NLS was coded for gestures (deictic, conventional, and representational) and gesture-speech combinations (reinforcing, disambiguating, supplementary, other) at entry and spoken language outcomes: speech quantity (rate of speech utterances) and speech quality (number of different words [NDW] and mean length of utterance in words [MLUw]) at exit using European Distributed Corpora Project Linguistic Annotator and Systematic Analysis of Language Transcripts. RESULTS: Controlling for nonverbal IQ and autism symptom severity at entry, rate of gesture-speech combinations (but not gestures alone) at entry was a significant predictor of rate of speech utterances and MLUw at exit. The rate of supplementary gesture-speech combinations, in particular, significantly predicted rate of speech utterances and NDW at exit. CONCLUSION: These findings highlight the critical importance of gestural communication, particularly gesture-speech (supplementary) combinations in supporting spoken language development in MV autistic children.


Subjects
Autistic Disorder, Gestures, Speech, Humans, Male, Female, Child, Preschool, Child, Autistic Disorder/psychology, Child Language, Language Development Disorders/psychology
17.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who had suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and an increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P < 0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers: slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy in the others. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke compared with conventional transfer learning, neural networks, and traditional machine learning approaches. In addition, K-Best feature selection and an increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry under registration number ChiCTR1800017568 on 2018/08/04.
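The core idea behind the prototypical-network approach named above is sketched below: each gesture class gets a prototype (the mean embedding of its support examples), and queries are labeled by the nearest prototype. The identity embedding and the sizes are stand-ins; real systems learn the embedding from EMG/FMG/IMU windows.

```python
# Sketch of prototypical-network classification for one-shot adaptation:
# class prototypes are support-set mean embeddings; nearest prototype wins.
import numpy as np

rng = np.random.default_rng(0)
n_classes, shots, dim = 7, 1, 32             # 7 gestures, one-shot support set
support = rng.normal(size=(n_classes, shots, dim)) + np.arange(n_classes)[:, None, None]
prototypes = support.mean(axis=1)            # (n_classes, dim)

query = rng.normal(size=(5, dim)) + 3        # 5 unlabeled windows near class 3
dists = np.linalg.norm(query[:, None, :] - prototypes[None], axis=-1)
print("predicted gestures:", dists.argmin(axis=1))
```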


Subjects
Gestures, Hand, Neural Networks, Computer, Stroke Rehabilitation, Humans, Stroke Rehabilitation/methods, Stroke Rehabilitation/instrumentation, Hand/physiopathology, Male, Female, Middle Aged, Stroke/complications, Stroke/physiopathology, Aged, Machine Learning, Transfer (Psychology)/physiology, Adult, Electromyography, Wearable Electronic Devices
18.
PLoS One ; 19(6): e0288670, 2024.
Article in English | MEDLINE | ID: mdl-38870182

ABSTRACT

Many viruses and diseases spread from one person to another through our respiratory system. COVID-19 served as an example of how crucial it is to trace and reduce contacts to stop its spread. There is a clear gap in finding automatic methods that can detect hand-to-face contact in complex urban scenes or indoors. In this paper, we introduce a computer vision framework, called FaceTouch, based on deep learning. It comprises deep sub-models to detect humans and analyse their actions. FaceTouch seeks to detect hand-to-face touches in the wild, such as through video chats, bus footage, or CCTV feeds. Despite partial occlusion of faces, the introduced system learns to detect face touches from the RGB representation of a given scene by utilising the representation of body gestures such as arm movement. This has been demonstrated to be useful in complex urban scenarios, beyond simply identifying hand movement and its closeness to faces. Relying on supervised contrastive learning, the introduced model is trained on our collected dataset, given the absence of other benchmark datasets. The framework shows strong validation on unseen datasets, which opens the door for potential deployment.
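The supervised contrastive objective named above pulls together embeddings of same-label samples and pushes apart the rest. The sketch below follows the widely used Khosla et al. (2020) formulation; shapes, labels, and the temperature are illustrative, not FaceTouch's settings.

```python
# Sketch of a supervised contrastive loss: same-label pairs are positives,
# all other pairs are negatives, with a temperature-scaled softmax.
import torch
import torch.nn.functional as F

def supcon_loss(emb, labels, tau=0.1):
    z = F.normalize(emb, dim=1)                    # unit-norm embeddings
    sim = z @ z.T / tau                            # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(mask_self, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])    # e.g., touch / no-touch
print(supcon_loss(emb, labels))
```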


Subjects
COVID-19, Humans, SARS-CoV-2/isolation & purification, Touch/physiology, Deep Learning, Hand/physiology, Contact Tracing/methods, Supervised Machine Learning, Gestures, Face
19.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving recognition accuracy by increasing the complexity of network models. However, training a complex model necessitates a significant amount of data, thereby escalating both user burden and computational costs. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across different users, conventional machine learning approaches reliant on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to solve the problems of high computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis, and the Pearson correlation coefficient (MPP). This method can filter out the optimal subset of features that matches a specific user and, combined with an SVM classifier, can accurately and efficiently recognize the user's gesture movements. To validate the method's effectiveness, we designed an experiment including five gesture actions. The experimental results show that, compared to the classification accuracy obtained using a single feature, the optimally selected feature subset improved accuracy by about 5% as the input to any of the classifiers. This study provides an effective basis for user-specific decoding of fine hand movements from sEMG signals.
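The sketch below illustrates per-user feature screening in the spirit of the MPP method: features are scored by mutual information and Pearson correlation with the gesture label, the best are kept, and an SVM classifies. How the paper actually fuses its MI, PCA, and Pearson criteria is not reproduced here; the fused ranking below is an assumption.

```python
# Hedged sketch of feature screening (MI + Pearson) feeding an SVM. The
# fusion rule and subset size are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=60, n_informative=12,
                           random_state=0)          # stand-in sEMG features

mi = mutual_info_classif(X, y, random_state=0)
pearson = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
score = mi / mi.max() + pearson / pearson.max()     # simple fused ranking (assumption)
best = np.argsort(score)[-15:]                      # keep the top 15 features

acc = cross_val_score(SVC(), X[:, best], y, cv=5).mean()
print(f"CV accuracy with selected features: {acc:.3f}")
```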


Subjects
Electromyography, Forearm, Gestures, Hand, Pattern Recognition, Automated, Humans, Electromyography/methods, Hand/physiology, Forearm/physiology, Pattern Recognition, Automated/methods, Male, Adult, Principal Component Analysis, Female, Algorithms, Movement/physiology, Young Adult, Support Vector Machine, Machine Learning
20.
J Robot Surg ; 18(1): 245, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38847926

ABSTRACT

Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. Seventy-eight AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases) into a less experienced group (< 475 cases) and a more experienced group (≥ 475 cases). Videos were decoded with gestures and assessed using the Dissection Assessment for Robotic Technique (DART). More experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from -0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery than in those with ongoing incontinence (p < 0.04). Gestures performed during AD vary with surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.


Subjects
Clinical Competence, Dissection, Prostatectomy, Robotic Surgical Procedures, Prostatectomy/methods, Humans, Robotic Surgical Procedures/methods, Male, Dissection/methods, Gestures, Prostatic Neoplasms/surgery, Surgeons