Results 1 - 20 of 3,505
1.
Sensors (Basel) ; 23(17)2023 Aug 26.
Article in English | MEDLINE | ID: mdl-37687896

ABSTRACT

We investigate the distribution of muscle signatures of human hand gestures under Dynamic Time Warping. For this, we present a k-Nearest-Neighbors classifier that uses Dynamic Time Warping as its distance estimate. To understand the resulting classification performance, we investigate the distribution of the recorded samples and derive a method of assessing the separability of a set of gestures. In addition, we present and evaluate two approaches with reduced real-time computational cost, examining both their effectiveness and the mechanics behind them. We further investigate the impact of different parameters on practical usability and background rejection, allowing fine-tuning of the resulting classification procedure.
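The classifier this abstract describes — k-Nearest-Neighbors with Dynamic Time Warping as the distance estimate — can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the template sequences and labels are invented for the example.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # classic DTW recurrence: match, insertion, or deletion
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def knn_classify(query, templates, k=1):
    """templates: list of (sequence, label) pairs.
    Returns the majority label among the k DTW-nearest templates."""
    dists = sorted((dtw_distance(query, seq), lab) for seq, lab in templates)
    top = [lab for _, lab in dists[:k]]
    return max(set(top), key=top.count)
```

In practice each template would be a recorded muscle-signature sequence per gesture class; the k=1 default mirrors the plain nearest-neighbor case.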


Subjects
Gestures, Muscles, Humans, Cluster Analysis, Records, Upper Extremity
2.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687978

ABSTRACT

Gestures have been used for nonverbal communication for a long time, but human-computer interaction (HCI) via gestures is becoming more common in the modern era. To obtain a greater recognition rate, traditional interfaces comprise various devices, such as gloves, physical controllers, and markers. This study provides a new markerless technique for capturing gestures without the need for any barriers or expensive hardware. In this paper, dynamic gestures are first converted into frames. Noise is removed and intensity is adjusted for feature extraction. The hand is first detected in the images, and the skeleton is computed through mathematical operations. From the skeleton, features are extracted; these features include joint color cloud, neural gas, and a directional active model. The features are then optimized, and a selective feature set is passed through a recurrent neural network (RNN) classifier to obtain classification results with higher accuracy. The proposed model was experimentally assessed and trained on three datasets: HaGRI, Egogesture, and Jester, achieving accuracies of 92.57%, 91.86%, and 91.57%, respectively. To check the model's reliability, the proposed method was also tested on the WLASL dataset, attaining 90.43% accuracy. This paper also includes a comparison of our model with other state-of-the-art recognition methods. Our model delivers a higher accuracy rate with a markerless approach, saving money and time when classifying gestures for better interaction.


Assuntos
Gestos , Agentes Neurotóxicos , Humanos , Automação , Redes Neurais de Computação , Reconhecimento Psicológico
3.
Behav Brain Res ; 453: 114629, 2023 09 13.
Article in English | MEDLINE | ID: mdl-37586564

ABSTRACT

OBJECTIVE: Blind individuals suffer from visual (i.e., sensory) deprivation. So-called "blindisms" (or "nervous" movements) have been described as a nonverbal consequence of such deprivation. However, the neuropsychological functions of these behaviours in blind individuals have not yet been investigated. We therefore analyzed the nonverbal hand movement and gestural behaviour of blind individuals under the hypothesis that their nonverbal expressions serve their own mental state rather than the nonverbal (gestural) depiction of mental images. METHODS: The entire nonverbal hand movement and gestural behaviour of right-handed healthy blind, matched sighted, and matched sighted/blindfolded individuals was analyzed during a standardized interview (about emotions and actions) by four independent certified raters employing the Neuropsychological Gesture (NEUROGES) Coding System. RESULTS: The results show no difference in overall hand movement activity between blind, sighted, and sighted/blindfolded individuals. Increased position shifts and on-body focused hand movements were found in blind individuals compared to sighted and sighted/blindfolded individuals. Sighted, but neither blind nor sighted/blindfolded, individuals increased egocentric deictic and pantomime gestures during the re-narration of an audio story. DISCUSSION: Blind individuals seem to desynchronize during conversation (shifts) and increase self-stimulation behaviour due to sensory deprivation (on body), but reduce the nonverbal transfer of mental images via hand gestures. We therefore conclude that the nonverbal hand movements of blind individuals serve their own mental state rather than the transfer of mental images.


Subjects
Gestures, Movement, Humans, Vision (Ocular), Upper Extremity, Hand, Blindness
4.
Stud Health Technol Inform ; 306: 481-486, 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37638952

ABSTRACT

We developed a gesture interface (AAGI) for individuals with motor dysfunction who cannot use standard interface switches. These users have cerebral palsy, quadriplegia, or traumatic brain injury and experience involuntary movement, spasticity, and so on. In this paper, we describe a disabled user who uses a mouth stick for laptop PC input in daily life. Our objective is to lower the burden on his body by using gestures. To this end, we developed a "home position" for the head that enables gestures to coexist with mouth stick usage. The results of basic experiments with five healthy participants indicate that our system has reached the level where it can be applied with actual disabled persons. Finally, we applied the system to a user with cerebral palsy and asked him to perform web browsing.


Assuntos
Lesões Encefálicas Traumáticas , Paralisia Cerebral , Masculino , Animais , Camundongos , Humanos , Gestos , Face , Voluntários Saudáveis
5.
Sensors (Basel) ; 23(16)2023 Aug 10.
Article in English | MEDLINE | ID: mdl-37631602

ABSTRACT

Automatic hand gesture recognition in video sequences has widespread applications, ranging from home automation to sign language interpretation and clinical operations. The primary challenge lies in achieving real-time recognition while managing temporal dependencies that can impact performance. Existing methods employ 3D convolutional or Transformer-based architectures with hand skeleton estimation, but both have limitations. To address these challenges, a hybrid approach that combines 3D Convolutional Neural Networks (3D-CNNs) and Transformers is proposed. The method uses a 3D-CNN to compute high-level semantic skeleton embeddings, capturing local spatial and temporal characteristics of hand gestures. A Transformer network with a self-attention mechanism is then employed to efficiently capture long-range temporal dependencies in the skeleton sequence. Evaluation on the Briareo and Multimodal Hand Gesture datasets resulted in accuracy scores of 95.49% and 97.25%, respectively. Notably, the approach achieves real-time performance on a standard CPU, distinguishing it from methods that require specialized GPUs. In summary, the hybrid 3D-CNN and Transformer approach effectively addresses real-time recognition and the efficient handling of temporal dependencies, outperforming existing state-of-the-art methods in both accuracy and speed.


Assuntos
Fontes de Energia Elétrica , Gestos , Automação , Redes Neurais de Computação , Esqueleto
6.
Cogn Sci ; 47(8): e13331, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37635624

ABSTRACT

Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy's cognitive foundation.


Assuntos
Gestos , Semântica , Humanos , Idioma , Linguística , Língua de Sinais
7.
Int J Geriatr Psychiatry ; 38(8): e5987, 2023 08.
Article in English | MEDLINE | ID: mdl-37587608

ABSTRACT

BACKGROUND: This is a methodological paper that aims to advance the conceptualisation of participatory research by focusing on the value of capturing and understanding movement as a vital means of communication for older people with dementia in a general hospital ward. Qualitative research involving people with dementia tends to be word-based and reliant upon verbal fluency; this article therefore considers a method for capturing and understanding movement. METHOD: This narrative enquiry is underpinned by the model of social citizenship, which recognises people with dementia as citizens with narratives to share. The study focused on spontaneously produced conversations that were video recorded and analysed through a lens of mobility. This enabled each participant to share what was important to them in that moment in time without always using words. FINDINGS: The study findings showed that people with dementia have narratives to share, but these narratives do not fit the bio-medically constructed model that is generally expected from patients. Utilising a mobilities lens enabled the narratives to be understood as containing layers of language. The first layer is the words; the second layer is the gestures and movements that support the words; and the third layer is micro movements. These movements do not only support the words but in some cases tell a different story altogether. CONCLUSION: This methodology brings attention to layers of communication that reveal narratives as a mobile process that requires work from both the teller and the listener to share and receive. Movements are shown to be the physical manifestations of embodied language which, when viewed through a lens of mobility, enable a deeper understanding of the experience of living with dementia as an inpatient. Viewing narratives through a mobilities lens is important to the advancement of dementia and citizenship practices.


Assuntos
Demência , Idioma , Humanos , Idoso , Gestos , Hospitais Gerais , Pacientes Internados
8.
Pediatr Ann ; 52(8): e309-e312, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37561827

ABSTRACT

A 6-year-old girl was referred to pediatric neurology because of concerns about her behavior. Her mother had noticed episodes in which the girl would wave her hand in front of her face and lose awareness of her surroundings several times per day. These episodes usually occurred when she was outdoors and had caused the child to walk into objects and stop in traffic. The patient otherwise had no neurological deficits or cognitive impairment, and there was no family history of neuropsychiatric disorders. Although the patient was aware of her behavior, she could not explain why she performed these hand-waving motions. A neurological workup revealed that these behaviors were not complex stereotypies but rather a rare and unusual disorder. This case highlights the role of neurology in assessing complex motor behaviors and offers insight into when a practicing pediatrician should consider a neurological workup for complex stereotypies. [Pediatr Ann. 2023;52(8):e309-e312.].


Assuntos
Doenças do Sistema Nervoso , Criança , Feminino , Humanos , Atenção , Conscientização , Gestos , Doenças do Sistema Nervoso/diagnóstico
9.
Nature ; 620(7976): 1037-1046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612505

ABSTRACT

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive [1]. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
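The word error rate quoted above is a standard metric: word-level edit distance divided by the number of reference words. A minimal sketch of the computation (not the study's evaluation code; the example sentences are invented):

```python
def word_error_rate(ref, hyp):
    """WER = word-level Levenshtein distance / number of reference words."""
    r, h = ref.split(), hyp.split()
    # D[i][j] = edit distance between first i ref words and first j hyp words
    D = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        D[i][0] = i
    for j in range(len(h) + 1):
        D[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = D[i - 1][j - 1] + (r[i - 1] != h[j - 1])  # substitution (or match)
            D[i][j] = min(sub, D[i - 1][j] + 1, D[i][j - 1] + 1)  # deletion, insertion
    return D[len(r)][len(h)] / len(r)
```

A 25% WER thus means roughly one word in four of the reference transcript needs to be substituted, inserted, or deleted to match the decoded text.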


Assuntos
Face , Próteses Neurais , Paralisia , Fala , Humanos , Córtex Cerebral/fisiologia , Córtex Cerebral/fisiopatologia , Ensaios Clínicos como Assunto , Comunicação , Aprendizado Profundo , Gestos , Movimento , Próteses Neurais/normas , Paralisia/fisiopatologia , Paralisia/reabilitação , Vocabulário , Voz
10.
Cognition ; 240: 105581, 2023 11.
Article in English | MEDLINE | ID: mdl-37573692

ABSTRACT

Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.


Assuntos
Gestos , Intenção , Humanos , Dedos , Mãos , Movimento
11.
Sensors (Basel) ; 23(15)2023 Aug 05.
Article in English | MEDLINE | ID: mdl-37571752

ABSTRACT

This paper describes the preliminary results of measuring the impact of human body movements on plants. The scope of this project is to investigate if a plant perceives human activity in its vicinity. In particular, we analyze the influence of eurythmic gestures of human actors on lettuce and beans. In an eight-week experiment, we exposed rows of lettuce and beans to weekly eurythmic movements (similar to Qi Gong) of a eurythmist, while at the same time measuring changes in voltage between the roots and leaves of lettuce and beans using the plant spikerbox. We compared this experimental group of vegetables to a control group of vegetables whose voltage differential was also measured while not being exposed to eurythmy. We placed a plant spikerbox connected to lettuce or beans in the vegetable plot while the eurythmist was performing their gestures about 2 m away; a second spikerbox was connected to a control plant 20 m away. Using t-tests, we found a clear difference between the experimental and the control group, which was also verified with a machine learning model. In other words, the vegetables showed a noticeably different pattern in electric potentials in response to eurythmic gestures.
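The group comparison reported here rests on a two-sample t-test. A minimal sketch of Welch's t statistic for two independent samples with unequal variances (illustrative only; the values below are invented and the study's exact test variant is not specified in the abstract):

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.
    Positive when sample_a's mean exceeds sample_b's."""
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)  # sample variances
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)
```

In the experiment's terms, `sample_a` and `sample_b` would hold a summary statistic of the voltage recordings from the exposed and control plants; a large |t| indicates the two groups differ more than within-group noise would suggest.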


Assuntos
Técnicas Biossensoriais , Gestos , Humanos , Verduras , Alface , Plantas , Folhas de Planta
12.
Sci Rep ; 13(1): 13164, 2023 08 13.
Article in English | MEDLINE | ID: mdl-37574499

ABSTRACT

Similarly to humans, rhesus macaques engage in mother-infant face-to-face interactions. However, no previous studies have described the naturally occurring structure and development of mother-infant interactions in this population and used a comparative-developmental perspective to directly compare them to the ones reported in humans. Here, we investigate the development of infant communication, and maternal responsiveness in the two groups. We video-recorded mother-infant interactions in both groups in naturalistic settings and analysed them with the same micro-analytic coding scheme. Results show that infant social expressiveness and maternal responsiveness are similarly structured in humans and macaques. Both human and macaque mothers use specific mirroring responses to specific infant social behaviours (modified mirroring to communicative signals, enriched mirroring to affiliative gestures). However, important differences were identified in the development of infant social expressiveness, and in forms of maternal responsiveness, with vocal responses and marking behaviours being predominantly human. Results indicate a common functional architecture of mother-infant communication in humans and monkeys, and contribute to theories concerning the evolution of specific traits of human behaviour.


Assuntos
Relações Mãe-Filho , Mães , Feminino , Animais , Humanos , Lactente , Macaca mulatta , Comportamento Social , Gestos
13.
Appl Ergon ; 113: 104082, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37418909

ABSTRACT

In high-risk environments, fast and accurate responses to warning systems are essential to efficiently handle emergency situations. The aim of the present study was twofold: 1) to investigate whether hand action videos (i.e., gesture alarms) trigger faster and more accurate responses than text alarm messages (i.e., written alarms), especially when mental workload (MWL) is high; and 2) to investigate the brain activity in response to both types of alarms as a function of MWL. Regardless of MWL, participants (N = 28) were found to be both faster and more accurate when responding to gesture alarms than to written alarms. Brain electrophysiological results suggest that this greater efficiency might be due to a facilitation of action execution, reflected by the decrease in mu and beta power around the response time window observed at the C3 and C4 electrodes. These results suggest that gesture alarms may improve operators' performance in emergency situations.


Assuntos
Alarmes Clínicos , Gestos , Humanos , Tempo de Reação , Carga de Trabalho
14.
Sensors (Basel) ; 23(10)2023 May 21.
Article in English | MEDLINE | ID: mdl-37430853

ABSTRACT

Wearable surface electromyography (sEMG) signal-acquisition devices have considerable potential for medical applications. Signals obtained from sEMG armbands can be used to identify a person's intentions using machine learning. However, the performance and recognition capabilities of commercially available sEMG armbands are generally limited. This paper presents the design of a wireless high-performance sEMG armband (hereinafter referred to as the α Armband), which has 16 channels and a 16-bit analog-to-digital converter and can reach 2000 samples per second per channel (adjustable) with a bandwidth of 0.1-20 kHz (adjustable). The α Armband can configure parameters and interact with sEMG data through low-power Bluetooth. We collected sEMG data from the forearms of 30 subjects using the α Armband and extracted three different image samples from the time-frequency domain for training and testing convolutional neural networks. The average recognition accuracy for 10 hand gestures was as high as 98.6%, indicating that the α Armband is highly practical and robust, with excellent development potential.
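Image-style inputs for CNN classifiers like the one above are typically built from sliding-window summaries of the raw sEMG stream. As one hedged example (a common sEMG feature, not necessarily the paper's feature set), a root-mean-square envelope over sliding windows:

```python
import math

def rms_windows(signal, win, step):
    """Root-mean-square envelope of a 1-D signal over sliding windows
    of length `win` samples with stride `step` samples."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        feats.append(math.sqrt(sum(x * x for x in seg) / win))
    return feats
```

Applied per channel, this turns a 16-channel recording into a compact 2-D feature map (channels × windows) that a classifier can consume.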


Assuntos
Antebraço , Gestos , Humanos , Eletromiografia , Intenção , Aprendizado de Máquina
15.
Sci Rep ; 13(1): 11000, 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37419881

ABSTRACT

Designing efficient and labor-saving prosthetic hands requires powerful hand gesture recognition algorithms that can achieve high accuracy with limited complexity and latency. In this context, the paper proposes a Compact Transformer-based Hand Gesture Recognition framework referred to as [Formula: see text], which employs a vision transformer network to conduct hand gesture recognition using high-density surface EMG (HD-sEMG) signals. Taking advantage of the attention mechanism, which is incorporated into the transformer architectures, our proposed [Formula: see text] framework overcomes major constraints associated with most of the existing deep learning models such as model complexity; requiring feature engineering; inability to consider both temporal and spatial information of HD-sEMG signals, and requiring a large number of training samples. The attention mechanism in the proposed model identifies similarities among different data segments with a greater capacity for parallel computations and addresses the memory limitation problems while dealing with inputs of large sequence lengths. [Formula: see text] can be trained from scratch without any need for transfer learning and can simultaneously extract both temporal and spatial features of HD-sEMG data. Additionally, the [Formula: see text] framework can perform instantaneous recognition using sEMG image spatially composed from HD-sEMG signals. A variant of the [Formula: see text] is also designed to incorporate microscopic neural drive information in the form of Motor Unit Spike Trains (MUSTs) extracted from HD-sEMG signals using Blind Source Separation (BSS). This variant is combined with its baseline version via a hybrid architecture to evaluate potentials of fusing macroscopic and microscopic neural drive information. The utilized HD-sEMG dataset involves 128 electrodes that collect the signals related to 65 isometric hand gestures of 20 subjects. 
The proposed [Formula: see text] framework is applied to window sizes of 31.25, 62.5, 125, and 250 ms of the above-mentioned dataset, utilizing 32, 64, and 128 electrode channels. Our results are obtained via 5-fold cross-validation by first applying the proposed framework to the dataset of each subject separately and then averaging the accuracies over all subjects. The average accuracy over all participants using 32 electrodes and a window size of 31.25 ms is 86.23%, which gradually increases to 91.98% for 128 electrodes and a window size of 250 ms. The [Formula: see text] achieves an accuracy of 89.13% for instantaneous recognition based on a single frame of the HD-sEMG image. The proposed model is statistically compared with a 3D Convolutional Neural Network (CNN) and two different variants of Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) models. The accuracy results for each of the above-mentioned models are paired with their precision, recall, F1 score, required memory, and train/test times. The results corroborate the effectiveness of the proposed [Formula: see text] framework compared to its counterparts.
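For reference, window sizes in milliseconds translate into sample counts once a sampling rate is fixed. A small helper illustrates the bookkeeping (the 2048 Hz rate in the test is an assumption chosen for illustration; the abstract does not state the device's actual rate):

```python
def window_samples(window_ms, fs_hz):
    """Number of samples in a window of `window_ms` milliseconds
    at a sampling rate of `fs_hz` Hz."""
    return round(window_ms * fs_hz / 1000.0)

def num_windows(total_samples, win, step):
    """How many sliding windows of length `win` (stride `step`)
    fit in a recording of `total_samples` samples."""
    return 0 if total_samples < win else (total_samples - win) // step + 1
```

At 2048 Hz, for instance, the 31.25 ms and 250 ms windows above correspond to 64 and 512 samples respectively, which is why such "odd" millisecond values appear in HD-sEMG work.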


Assuntos
Gestos , Redes Neurais de Computação , Humanos , Algoritmos , Eletromiografia/métodos , Reconhecimento Psicológico , Mãos
16.
Sensors (Basel) ; 23(12)2023 Jun 09.
Article in English | MEDLINE | ID: mdl-37420629

ABSTRACT

Gesture recognition is a mechanism by which a system recognizes an expressive and purposeful action made by a user's body. Hand-gesture recognition (HGR) is a staple of the gesture-recognition literature and has been keenly researched over the past 40 years. Over this time, HGR solutions have varied in medium, method, and application. Modern developments in machine perception have seen the rise of single-camera, skeletal-model, hand-gesture identification algorithms, such as MediaPipe Hands (MPH). This paper evaluates the applicability of these modern HGR algorithms within the context of alternative control. Specifically, this is achieved through the development of an HGR-based alternative-control system capable of controlling a quad-rotor drone. The technical importance of this paper stems from the results produced during the novel and clinically sound evaluation of MPH, alongside the investigatory framework used to develop the final HGR algorithm. The evaluation of MPH highlighted the Z-axis instability of its modelling system, which reduced the landmark accuracy of its output from 86.7% to 41.5%. The selection of an appropriate classifier complemented the computationally lightweight nature of MPH whilst compensating for its instability, achieving a classification accuracy of 96.25% for eight single-hand static gestures. The success of the developed HGR algorithm ensured that the proposed alternative-control system could facilitate intuitive, computationally inexpensive, and repeatable drone control without requiring specialised equipment.


Assuntos
Gestos , Dispositivos Aéreos não Tripulados , Mãos , Algoritmos
17.
Sensors (Basel) ; 23(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420722

ABSTRACT

Hand gesture recognition (HGR) is a crucial area of research that enhances communication by overcoming language barriers and facilitating human-computer interaction. Although previous works in HGR have employed deep neural networks, they fail to encode the orientation and position of the hand in the image. To address this issue, this paper proposes HGR-ViT, a Vision Transformer (ViT) model with an attention mechanism for hand gesture recognition. Given a hand gesture image, it is first split into fixed-size patches. Positional embeddings are added to the patch embeddings to form learnable vectors that capture the positional information of the hand patches. The resulting sequence of vectors then serves as the input to a standard Transformer encoder to obtain the hand gesture representation. A multilayer perceptron head is added to the output of the encoder to classify the hand gesture into the correct class. The proposed HGR-ViT obtains an accuracy of 99.98%, 99.36% and 99.85% on the American Sign Language (ASL) dataset, the ASL with Digits dataset, and the National University of Singapore (NUS) hand gesture dataset, respectively.
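The first step described — splitting the image into fixed-size patches before embedding — can be sketched as follows. This is a toy illustration on a nested-list "image", not the HGR-ViT code itself:

```python
def split_into_patches(image, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch blocks, each flattened row-major — the tokenization
    step of a ViT-style pipeline."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            flat = [image[top + r][left + c]
                    for r in range(patch) for c in range(patch)]
            patches.append(flat)
    return patches
```

In a real ViT, each flattened patch is then linearly projected to an embedding, a learned positional vector is added, and the sequence is fed to the Transformer encoder.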


Assuntos
Gestos , Reconhecimento Automatizado de Padrão , Humanos , Reconhecimento Automatizado de Padrão/métodos , Redes Neurais de Computação , Extremidade Superior , Língua de Sinais , Mãos
18.
Biosci Trends ; 17(3): 219-229, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37394614

ABSTRACT

With the development of deep learning technology, gesture recognition based on surface electromyography (EMG) signals has shown broad application prospects in various human-computer interaction fields. Most current gesture recognition technologies can achieve high recognition accuracy on a wide range of gesture actions. However, in practical applications, gesture recognition based on surface EMG signals is susceptible to interference from irrelevant gesture movements, which affects the accuracy and security of the system. Therefore, it is crucial to design an irrelevant gesture recognition method. This paper introduces the GANomaly network from the field of image anomaly detection into surface EMG-based irrelevant gesture recognition. The network has a small feature reconstruction error for target samples and a large feature reconstruction error for irrelevant samples. By comparing the relationship between the feature reconstruction error and the predefined threshold, we can determine whether the input samples are from the target category or the irrelevant category. In order to improve the performance of EMG irrelevant gesture recognition, this paper proposes a feature reconstruction network named EMG-FRNet for EMG irrelevant gesture recognition. This network is based on GANomaly and incorporates structures such as channel cropping (CC), cross-layer encoding-decoding feature fusion (CLEDFF), and SE channel attention (SE). In this paper, Ninapro DB1, Ninapro DB5 and self-collected datasets were used to verify the performance of the proposed model. The Area Under the receiver operating characteristic Curve (AUC) values of EMG-FRNet on the above three datasets were 0.940, 0.926 and 0.962, respectively. Experimental results demonstrate that the proposed model achieves the highest accuracy among related research.
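The decision rule described — compare the feature reconstruction error against a predefined threshold — is simple to state in code. A hedged sketch (the quantile-based threshold choice is an illustrative assumption, not necessarily the paper's procedure):

```python
def pick_threshold(target_errors, quantile=0.95):
    """Choose a decision threshold as a quantile of the reconstruction
    errors observed on known target-class samples."""
    s = sorted(target_errors)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return s[idx]

def classify(error, threshold):
    """GANomaly-style open-set decision: small reconstruction error means
    the sample resembles the target gestures; large error means irrelevant."""
    return "target" if error <= threshold else "irrelevant"
```

Sweeping the threshold over a validation set is what produces the ROC curve whose area (AUC) the paper reports.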


Assuntos
Gestos , Movimento , Humanos , Eletromiografia/métodos , Curva ROC , Algoritmos
19.
Article in English | MEDLINE | ID: mdl-37418414

ABSTRACT

Surface electromyography (sEMG) based gesture recognition has received broad attention and application in rehabilitation areas for its direct and fine-grained sensing ability. However, sEMG signals exhibit strong user dependence across users with different physiologies, making a recognition model trained on existing users inapplicable to new users. Domain adaptation is the most representative method to reduce this user gap, using feature decoupling to acquire motion-related features. However, existing domain adaptation methods show poor decoupling results when handling complex time-series physiological signals. Therefore, this paper proposes an Iterative Self-Training based Domain Adaptation method (STDA) to supervise the feature decoupling process with pseudo-labels generated by self-training and to explore cross-user sEMG gesture recognition. STDA mainly consists of two parts: discrepancy-based domain adaptation (DDA) and pseudo-label iterative update (PIU). DDA aligns existing users' data and new users' unlabeled data with a Gaussian kernel-based distance constraint. PIU iteratively updates pseudo-labels to generate more accurately labelled data on new users while maintaining category balance. Detailed experiments are performed on publicly available benchmark datasets, including the NinaPro dataset (DB-1 and DB-5) and the CapgMyo dataset (DB-a, DB-b, and DB-c). Experimental results show that the proposed method achieves significant performance improvement compared with existing sEMG gesture recognition and domain adaptation methods.
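The self-training loop at the heart of such methods — pseudo-label the confident points, fold them into the training set, re-estimate, repeat — can be illustrated with a toy nearest-centroid model. This stands in for STDA's actual network; the margin rule and data are invented for the example:

```python
def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    dim = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dim)]

def dist(a, b):
    """Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(labelled, unlabelled, rounds=3, margin=0.5):
    """labelled: {class_label: [feature_vectors]} from existing users;
    unlabelled: feature vectors from a new user. Requires >= 2 classes.
    Each round, assign each pooled point to its nearest class centroid,
    but only keep 'confident' pseudo-labels (nearest centroid wins by at
    least `margin`); then recompute centroids with the new points."""
    data = {lab: list(pts) for lab, pts in labelled.items()}
    pool = list(unlabelled)
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in data.items()}
        keep = []
        for x in pool:
            d = sorted((dist(x, c), lab) for lab, c in cents.items())
            if d[1][0] - d[0][0] >= margin:  # confident pseudo-label
                data[d[0][1]].append(x)
            else:
                keep.append(x)  # defer to a later round
        pool = keep
    return {lab: centroid(pts) for lab, pts in data.items()}
```

The confidence gate plays the role of PIU's category-balanced pseudo-label selection: uncertain new-user samples are deferred rather than allowed to corrupt the model.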


Assuntos
Algoritmos , Gestos , Humanos , Eletromiografia/métodos , Reconhecimento Psicológico , Atenção
20.
Methods ; 218: 39-47, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37479003

ABSTRACT

CONTEXT: Surface electromyography (sEMG) signals contain rich information recorded from muscle movements and therefore reflect the user's intention. sEMG has seen wide application in rehabilitation, clinical diagnosis, and human engineering. However, current feature extraction methods for sEMG signals are seriously limited by the signals' stochasticity, transiency, and non-stationarity. OBJECTIVE: Our objective is to overcome the difficulties induced by these downsides of sEMG and thereby extract representative features for various downstream movement recognition tasks. METHOD: We propose a novel 3-axis view of sEMG features composed of temporal, spatial, and channel-wise summaries. We leverage the state-of-the-art Transformer architecture to enable efficient parallel search and to overcome the limitations imposed by previous work in gesture classification. The Transformer model is built on an attention-based module, which allows for the extraction of global contextual relevance among channels and the use of this relevance for sEMG recognition. RESULTS: We compared the proposed method against existing methods on two Ninapro datasets consisting of data from both healthy people and amputees. Experimental results show the proposed method attains state-of-the-art (SOTA) accuracy on both datasets. We further show that the proposed method enjoys strong generalization ability: a new SOTA is achieved by pretraining the model on a different dataset and then fine-tuning it on the target dataset.
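The attention-based module mentioned can be illustrated by plain scaled dot-product attention over a sequence of channel embeddings. This is a generic sketch of the mechanism, not the paper's architecture; queries, keys, and values are given directly rather than learned:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention:
    out[i] = sum_j softmax_j(Q[i] . K[j] / sqrt(d)) * V[j].
    Q, K, V are lists of equal-dimension vectors (K and V same length)."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wj * v[t] for wj, v in zip(w, V))
                    for t in range(len(V[0]))])
    return out
```

With channel embeddings as the sequence, the attention weights express exactly the "global contextual relevance among channels" the abstract refers to: each output channel is a relevance-weighted mixture of all channels.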


Assuntos
Algoritmos , Gestos , Humanos , Eletromiografia/métodos