Results 1 - 20 of 1,220
1.
Cogn Sci ; 48(9): e13484, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228272

ABSTRACT

When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory, hypothesis-generating study of descriptions produced by an ethnolinguistic community little known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their families in semi-guided kinship interviews. Analyses of the speech, gestures, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline on the right side when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonals of the sagittal axis. We show that these diagonals also appear in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices. We interpret this behavior as evidence of a spatial template that Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication, and thereby provide further insights into the diversity of social structures.


Subject(s)
Cognition , Communication , Family , Gestures , Humans , Male , Female , Family/psychology , Adult , Speech , Middle Aged
2.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making effective facilitation of human-robot interaction (HRI) essential. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, allowing humans to concentrate more on their primary tasks. In this paper, we introduce the Robot-Facilitated Interaction System (RFIS), in which mobile robots perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of the various interaction-facilitation modules within a real-time HRI system operating in an edge-computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide high-quality HRI facilitation overall, with average accuracies exceeding 90% during real-time operation at 5 FPS.


Subject(s)
Gestures , Robotics , Robotics/methods , Humans , Pattern Recognition, Automated/methods , Algorithms , Artificial Intelligence
3.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity because they allow effortless and natural interaction between the user and the machine: information gathered from one or more sensing modalities is processed, and user intentions are transcribed into the desired actions. Their operability depends on frequent periodic re-calibration with newly acquired data, because they must adapt to dynamic environments in which test-time data continuously change in unforeseen ways; this need significantly contributes to their abandonment and remains unexplored by the ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable to the state of the art while having 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% improvement in classification accuracy over the no-re-calibration setting. However, our results suggest that where the experimental setup and the UDA configuration differ, the observed enhancements may be small or even unnoticeable.
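A minimal sketch of the DANN-style re-calibration described above, assuming illustrative layer sizes, an 8-class gesture output, and a fixed loss weight lam (none of which are taken from the paper): a gradient-reversal layer lets the domain classifier's loss train the shared feature extractor to produce session-invariant features, so the gesture classifier fitted on the labeled source session transfers to the unlabeled target session.

```python
# Hedged sketch of domain-adversarial re-calibration (not the authors' code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feature = nn.Sequential(nn.Linear(256, 64), nn.ReLU())   # shared extractor (sizes assumed)
classifier = nn.Linear(64, 8)                            # e.g., 8 finger gestures (assumed)
domain_head = nn.Linear(64, 2)                           # source vs. target session
opt = torch.optim.Adam([*feature.parameters(),
                        *classifier.parameters(),
                        *domain_head.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

def dann_step(x_src, y_src, x_tgt, lam=0.1):
    """One step: gesture labels exist only for the source session."""
    f_src, f_tgt = feature(x_src), feature(x_tgt)
    cls_loss = ce(classifier(f_src), y_src)
    dom_feats = GradReverse.apply(torch.cat([f_src, f_tgt]), lam)
    dom_labels = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                            torch.ones(len(x_tgt), dtype=torch.long)])
    dom_loss = ce(domain_head(dom_feats), dom_labels)
    opt.zero_grad()
    (cls_loss + dom_loss).backward()
    opt.step()
```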


Subject(s)
Algorithms , Ultrasonography , Humans , Ultrasonography/methods , User-Computer Interface , Wrist/physiology , Wrist/diagnostic imaging , Neural Networks, Computer , Fingers/physiology , Man-Machine Systems , Gestures
4.
ACS Appl Mater Interfaces ; 16(32): 42242-42253, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39102499

ABSTRACT

A multiple self-powered sensor-integrated mobile manipulator (MSIMM) system was proposed to address challenges in existing exploration devices, such as the need for a constant energy supply, the limited variety of sensed information, and difficult human-computer interfaces. The MSIMM system integrates triboelectric nanogenerator (TENG)-based self-powered sensors, a bionic manipulator, and wireless gesture control, enhancing sensor data usability through machine learning. Specifically, the system includes a tracked vehicle platform carrying the manipulator and electronics, including a storage battery and a microcontroller unit (MCU). An integrated sensor glove and terminal application (APP) enable intuitive manipulator control, improving human-computer interaction. The system responds to and analyzes various environmental stimuli, including droplet fall height, temperature, pressure, material type, angle, angular velocity direction, and acceleration amplitude and direction. The manipulator, fabricated using 3D printing technology, integrates multiple sensors that generate electrical signals through the triboelectric effect of mechanical motion. These signals are classified using convolutional neural networks for accurate environmental monitoring. On our database, signal recognition and classification accuracy exceeds 94%, with specific accuracies of 100% for the pressure sensor, 99.55% for the angle sensor, and 98.66%, 95.91%, 96.27%, and 94.13% for the material, droplet, temperature, and acceleration sensors, respectively.

5.
Sci Rep ; 14(1): 20247, 2024 08 30.
Article in English | MEDLINE | ID: mdl-39215011

ABSTRACT

Long-term electroencephalography (EEG) recordings have primarily been used to study resting-state fluctuations. These recordings provide valuable insights into phenomena such as sleep stages, cognitive processes, and neurological disorders. This study, however, explores a new angle, focusing for the first time on how EEG dynamics evolve over time in the context of movement. Twenty-two healthy individuals were measured six times, from 2 p.m. to 12 a.m. at 2 h intervals, while performing four right-hand gestures. Analysis of movement-related cortical potentials (MRCPs) revealed a reduction in the amplitude of the motor and post-motor potentials during later hours of the day. Evaluation in source space showed an increase in the activity of the contralateral primary motor cortex (M1) and the supplementary motor area (SMA) of both hemispheres until 8 p.m., followed by a decline until midnight. Furthermore, we investigated how these changes in MRCP dynamics over time affect the ability to decode motor information, developing classification schemes to assess performance across different scenarios. The observed variations in classification accuracy over time strongly indicate the need for adaptive decoders. Such adaptive decoders would be instrumental in delivering robust results, essential for the practical application of brain-computer interfaces (BCIs) during daytime and nighttime use.
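The case for adaptive decoders can be illustrated with a toy comparison, sketched here on synthetic features (not the authors' EEG pipeline): a static classifier is fit once on the first session, while an adaptive one is refit on all sessions seen so far before decoding the next; a slow drift term stands in for the time-of-day changes in MRCP amplitude.

```python
# Hedged sketch: static vs. adaptive decoding under feature drift (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
centers = rng.normal(size=(4, 16))                 # four gesture templates (assumed)

def session(drift):
    y = rng.integers(0, 4, size=120)               # four right-hand gestures
    X = centers[y] + drift + rng.normal(scale=0.8, size=(120, 16))
    return X, y

sessions = [session(d) for d in np.linspace(0.0, 2.0, 6)]    # 2 p.m. ... midnight
static = LogisticRegression(max_iter=500).fit(*sessions[0])  # trained once
for i in range(1, 6):
    X_seen = np.vstack([s[0] for s in sessions[:i]])
    y_seen = np.hstack([s[1] for s in sessions[:i]])
    adaptive = LogisticRegression(max_iter=500).fit(X_seen, y_seen)
    X, y = sessions[i]
    print(f"session {i}: static={static.score(X, y):.2f}, "
          f"adaptive={adaptive.score(X, y):.2f}")
```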


Subject(s)
Electroencephalography , Gestures , Hand , Humans , Electroencephalography/methods , Male , Female , Hand/physiology , Adult , Young Adult , Movement/physiology , Motor Cortex/physiology , Brain-Computer Interfaces
6.
Front Bioeng Biotechnol ; 12: 1401803, 2024.
Article in English | MEDLINE | ID: mdl-39144478

ABSTRACT

Introduction: Hand gestures are an effective communication tool that can convey a wealth of information in a variety of sectors, including medicine and education. E-learning has grown significantly in recent years and is now an essential resource for many organizations, yet little research has been conducted on the use of hand gestures in e-learning. Similarly, gestures are frequently used by medical professionals to aid diagnosis and treatment. Method: We aim to improve the way instructors, students, and medical professionals exchange information by introducing a dynamic method for hand gesture monitoring and recognition. Our approach consists of six modules: video-to-frame conversion, preprocessing for quality enhancement, hand skeleton mapping with single shot multibox detector (SSMD) tracking, hand detection using background modeling and a convolutional neural network (CNN) bounding-box technique, feature extraction using point-based and full-hand-coverage techniques, and optimization using a population-based incremental learning algorithm. A 1D CNN classifier is then used to identify hand gestures. Results: After extensive experimentation, we obtained hand tracking accuracies of 83.71% and 85.71% on the Indian Sign Language and WLASL datasets, respectively. These findings show how well our method recognizes hand gestures. Discussion: Teachers, students, and medical professionals can all efficiently transmit and comprehend information using the proposed system. The obtained accuracy rates highlight how our method can improve communication and ease information exchange across domains.
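The final classification stage of such a pipeline can be sketched briefly. The following is a minimal illustration, assuming a 128-dimensional feature vector and 10 gesture classes (both illustrative, not the paper's values), of a 1D CNN classifier applied to extracted hand features:

```python
# Hedged sketch of a 1D CNN gesture classifier (dimensions assumed).
import torch
import torch.nn as nn

class Gesture1DCNN(nn.Module):
    def __init__(self, feat_len=128, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                   # x: (batch, feat_len)
        return self.net(x.unsqueeze(1))     # add a channel dim -> (batch, 1, feat_len)

logits = Gesture1DCNN()(torch.randn(4, 128))  # four feature vectors -> (4, 10) scores
```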

7.
Philos Trans R Soc Lond B Biol Sci ; 379(1911): 20230156, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39155717

ABSTRACT

The gestures we produce serve a variety of functions-they affect our communication, guide our attention and help us think and change the way we think. Gestures can consequently also help us learn, generalize what we learn and retain that knowledge over time. The effects of gesture-based instruction in mathematics have been well studied. However, few of these studies are directly applicable to classroom environments. Here, we review literature that highlights the benefits of producing and observing gestures when teaching and learning mathematics, and we provide suggestions for designing research studies with an eye towards how gestures can feasibly be applied to classroom learning. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.


Subject(s)
Gestures , Learning , Mathematics , Humans , Child , Mathematics/education , Teaching , School Teachers/psychology , Cognition , Schools
8.
Top Cogn Sci ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39190828

ABSTRACT

Languages are neither designed in classrooms nor drawn from dictionaries-they are products of human minds and human interactions. However, it is challenging to understand how structure grows in these circumstances because generations of use and transmission shape and reshape the structure of the languages themselves. Laboratory studies on language emergence investigate the origins of language structure by requiring participants, prevented from using their own natural language(s), to create a novel communication system and then transmit it to others. Because the participants in these lab studies are already speakers of a language, it is easy to question the relevance of lab-based findings to the creation of natural language systems. Here, we take the findings from a lab-based language emergence paradigm and test whether the same pattern is also found in a new natural language: Nicaraguan Sign Language. We find evidence that signers of Nicaraguan Sign Language may show the same biases seen in lab-based language emergence studies: (1) they appear to condition word order based on the semantic dimension of intensionality and extensionality, and (2) they adjust this conditioning to satisfy language-internal order constraints. Our study adds to the small, but growing literature testing the relevance of lab-based studies to natural language birth, and provides convincing evidence that the biases seen in the lab play a role in shaping a brand new language.

9.
Bioengineering (Basel) ; 11(8)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39199769

ABSTRACT

Surface electromyography (sEMG) is commonly used as an interface in human-machine interaction systems because of its high signal-to-noise ratio and easy acquisition. It intuitively reflects users' motion intentions and is therefore widely applied in gesture recognition systems. However, wearable sEMG-based gesture recognition systems are susceptible to changes in environmental noise, electrode placement, and physiological characteristics. This can substantially degrade model performance in inter-session scenarios, giving users a poor experience. To address noise from environmental changes and electrode shift from variability in wearing, numerous studies have proposed data-augmentation methods and highly generalized networks to improve inter-session gesture recognition accuracy. However, few studies have considered the impact of individual physiological state. In this study, we hypothesized that user exercise changes muscle conditions, leading to variations in sEMG features that in turn affect the model's recognition accuracy. To test this hypothesis, we collected sEMG data from 12 participants performing the same gesture tasks before and after exercise, and then used Linear Discriminant Analysis (LDA) for gesture classification. For the non-exercise group, inter-session accuracy declined by only 2.86%, whereas that of the exercise group decreased by 13.53%. This finding shows that exercise is indeed a critical factor contributing to the decline in inter-session model performance.
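The reported comparison boils down to fitting LDA on one session and scoring it on a later, shifted one. A minimal sketch on synthetic features (shapes and the shift model are assumptions, not the study's data):

```python
# Hedged sketch of the inter-session LDA evaluation (synthetic stand-in data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_before = rng.normal(size=(200, 32))      # session 1: 200 windows x 32 sEMG features
y = ((X_before[:, 0] > 0) + 2 * (X_before[:, 1] > 0)).astype(int)  # 4 mock gestures
X_after = X_before + rng.normal(scale=1.0, size=X_before.shape)    # post-exercise shift

lda = LinearDiscriminantAnalysis().fit(X_before, y)
print("within-session:", lda.score(X_before, y))
print("inter-session: ", lda.score(X_after, y))   # accuracy drops as features shift
```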

10.
ACS Sens ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39193764

ABSTRACT

Conductive hydrogels are considered among the most promising sensing materials for wearable strain sensors. However, both the hydrophilicity of the polymer chains and the high water content severely limit the potential applications of hydrogel-based sensors in extreme conditions. In this study, a multi-cross-linked hydrogel that can withstand ultralow temperatures below -80 °C was prepared by simultaneously introducing a double-network matrix, multiple conductive fillers, and free-moving ions. A superhydrophobic Ecoflex layer with a water contact angle of 159.1° was coated onto the hydrogel using simple spraying and laser-engraving methods. Additionally, a smart glove integrating five hydrogel strain sensors with a microprocessor was developed to recognize 12 types of diving gestures and synchronously transmit recognition results to smartphones. The superhydrophobic and antifreezing hydrogel strain sensor proposed in this study shows promising potential in wearable electronics, human-machine interfaces, and underwater applications.

12.
Front Psychol ; 15: 1429232, 2024.
Article in English | MEDLINE | ID: mdl-39035091

ABSTRACT

Previous research has argued that consecutive interpreters constitute laminated speakers in the sense that they engage with different kinds of footing at once, representing another's point of view through their words in another language. These multiple roles also play out in their gesturing, as they sometimes indicate deictically who is the source of the ideas and stances they are expressing (the principal). Simultaneous interpreters, though, often work in an interpreting booth; they are often not seen by the audience, yet many of them gesture, sometimes frequently. How do simultaneous interpreters use gesture in relation to stance-taking and footing? We consider the case of simultaneous interpreters rendering popular science lectures between (both to and from) Russian (their L1) and either English or German (their L2). Although they heard only the audio of the lectures, the interpreters produced many gestures, which were analyzed for their function. Some representational and deictic gestures appeared to clearly involve the interpreter as the principal (writing numbers with one's finger to help remember them, or pointing to two places on the desk to keep track of two different quantities mentioned). Other representational and deictic gestures are ambiguous as to whether they enact what the interpreter imagined the lecturer doing or whether they arose from the interpreter's own thinking for speaking (e.g., tracing the form of a bird being mentioned, or pointing to an empty space when the lecturer was referring to a graph). Pragmatic gestures, showing one's stance toward the topic of the talk, were the most ambiguous as to footing, reflecting how the interpreter may be engaged in fictive interaction with an imagined audience. Self-adapters, however, more clearly involve the interpreter as the principal, as such actions are known to support cognitive focusing and self-soothing. In sum, we see varying degrees of clarity as to whose stance and principal footing simultaneous interpreters express bodily as laminated speakers. This variable ambiguity can be attributed to the nature of gesture as a semiotic system, whose functions are more often dependent on co-occurring speech than vice versa.

13.
Brain Struct Funct ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39014269

ABSTRACT

Limb apraxia is a higher-order motor disorder, often occurring post-stroke, that affects skilled actions. It is assessed through tasks involving gesture production or pantomime, gesture recognition, meaningless gesture imitation, complex figure drawing, and single- and multi-object use. A two-system model for the organisation of actions hypothesizes distinct pathways mediating praxis deficits via a conceptual 'indirect' route and a perceptual 'direct' route to action. Traditional lesion-symptom mapping techniques have failed to identify these distinct routes. We assessed 29 left-hemisphere stroke patients to investigate white-matter disconnections underlying deficits on praxis tasks from the Birmingham Cognitive Screening. White-matter disconnection maps derived from patients' structural T1 lesions were created using a diffusion-weighted healthy-participant dataset acquired from the Human Connectome Project (HCP). Initial group-level regression analyses revealed significant disconnection between the occipital lobes via the splenium of the corpus callosum, with involvement of the inferior longitudinal fasciculus, in meaningless gesture imitation deficits. There was a trend toward left fornix disconnection in gesture production deficits. Furthermore, voxel-wise Bayesian Crawford single-case analyses performed on the two patients with the most severe meaningless gesture imitation and meaningful gesture production deficits, respectively, confirmed distinct posterior interhemispheric disconnection for the former, and disconnections between temporal and frontal areas via the fornix, the rostrum of the corpus callosum, and the anterior cingulum for the latter. Our results suggest distinct pathways associated with perceptual and conceptual deficits, akin to 'direct' and 'indirect' action routes, with some patients displaying both. Larger studies are needed to validate and elaborate on these findings, advancing our understanding of limb apraxia.

14.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach to elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). The paper describes the design of the inductive sensor array integrated into a flexible, wearable sleeve. The array consists of coils sewn onto the sleeve, which form an LC tank circuit together with externally connected inductors and capacitors. Changes in elbow position modulate the inductance of these coils, allowing the array to capture a range of elbow movements. The signal processing chain and the random forest MLA used to recognize 10 different elbow gestures are described. Rigorous evaluation on 8 subjects, with data augmentation that expanded the dataset to 1,270 trials per gesture, enabled the system to achieve accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. Test performance was then assessed on data collected from five new subjects; the high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
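The two evaluation protocols named in the abstract, 5-fold cross-validation and leave-one-subject-out cross-validation, can be sketched as follows on synthetic stand-in data (the 6 coil channels, 10 gestures, and 8 subjects come from the abstract; everything else is assumed):

```python
# Hedged sketch of the two cross-validation protocols (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))            # 6 inductive-coil channels per trial
y = rng.integers(0, 10, size=800)        # 10 elbow gestures
subject = rng.integers(0, 8, size=800)   # 8 subjects

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("5-fold:", cross_val_score(clf, X, y, cv=5).mean())
print("LOSO:  ", cross_val_score(clf, X, y, groups=subject,
                                 cv=LeaveOneGroupOut()).mean())
```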


Subject(s)
Algorithms , Elbow , Gestures , Machine Learning , Humans , Elbow/physiology , Wearable Electronic Devices , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Male , Adult , Female
15.
Front Psychol ; 15: 1386187, 2024.
Article in English | MEDLINE | ID: mdl-39027047

ABSTRACT

Introduction: Hand gestures and actions-with-objects (hereafter 'actions') are both forms of movement that can promote learning. However, the two have unique affordances, which means they have the potential to promote learning in different ways. Here we compare how children learn, and importantly retain, information after performing gestures, actions, or a combination of the two during instruction about mathematical equivalence. We also ask whether individual differences in children's understanding of mathematical equivalence (as assessed by spontaneous gesture before instruction) affect the impact of gesture- and action-based instruction. Method: Across two studies, racially and ethnically diverse third- and fourth-grade students (N=142) were given instruction in how to solve mathematical equivalence problems (e.g., 2+9+4=__+4) as part of a pretest-training-posttest design. In Study 1, instruction involved teaching students to produce either actions or gestures. In Study 2, instruction involved teaching students to produce either actions followed by gestures or gestures followed by actions. Across both studies, speech and gesture produced during pretest explanations were coded and analyzed to measure individual differences in pretest understanding. Children completed written posttests immediately after instruction, the following day, and four weeks later, to assess learning, generalization, and retention. Results: In Study 1 we find that, regardless of individual differences in pretest understanding of mathematical equivalence, children learn from both action and gesture, but gesture-based instruction promotes retention better than action-based instruction. In Study 2 we find an influence of individual differences: children who produced relatively few types of problem-solving strategies (as assessed by their pretest gestures and speech) perform better when they receive action training before gesture training than when they receive gesture training first. In contrast, children who expressed many types of strategies, and thus had a more complex understanding of mathematical equivalence prior to instruction, performed equally well with both orders. Discussion: These results demonstrate that action training, followed by gesture, can be a useful stepping-stone in the initial stages of learning mathematical equivalence, and that gesture training can help learners retain what they learn.

16.
Percept Mot Skills ; : 315125241266645, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033337

ABSTRACT

Coaches often use pointing gestures alongside their speech to reinforce their message and emphasize important concepts during instructional communication, but the impact of simultaneous pointing gestures and speech on learners' recall remains unclear. We used eye-tracking and recalled performance to investigate the impact of a coach's variously timed pointing gestures and speech on two groups of learners' (novices and experts) visual attention and recall of tactical instructions. Participants were 96 basketball players (48 novice and 48 expert) who attempted to recall instructions about the evolution of a basketball game system under two teaching conditions: speech accompanied by gestures and speech followed by gestures. Overall, the results showed that novice players benefited more from instructional speech accompanied by gestures than from speech followed by gestures. This was evidenced by their greater visual attention to the diagrams, demonstrated through a higher fixation count and fewer saccadic shifts between the coach and the diagrams. They also exhibited improved recall and reduced mental effort, despite having the same fixation time on the diagrams and equivalent recall time. Conversely, experts benefited more from instructional speech followed by gestures, indicating an expertise reversal effect. These results suggest that coaches and educators may improve their tactical instruction by timing the pairing of their hand gestures and speech to the learner's level of expertise.

17.
ACS Sens ; 9(8): 4216-4226, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39068608

ABSTRACT

Thermoelectric (TE) hydrogels, which mimic human skin in combining temperature and strain sensing capabilities, are well suited for human-machine interaction interfaces and wearable devices. In this study, a TE hydrogel with high toughness and temperature responsiveness was created using the Hofmeister effect and the TE current effect, achieved through the cross-linking of PVA/PAA/carboxymethyl cellulose triple networks. The Hofmeister effect, facilitated by Na+ and SO42- ion coordination, notably increased the hydrogel's tensile strength (800 kPa). Introduction of Fe2+/Fe3+ as a redox pair conferred a high Seebeck coefficient (2.3 mV K-1), enhancing temperature responsiveness. Using this dual-responsive sensor, a feedback mechanism combining deep learning with a robotic hand was successfully demonstrated (with a recognition accuracy of 95.30%), alongside temperature warnings at various levels. The sensor is expected to replace manual work through manipulator control in certain high-temperature, high-risk scenarios, improving the safety factor and underscoring the vast potential of TE hydrogel sensors in motion monitoring and human-machine interaction applications.


Subject(s)
Deep Learning , Hydrogels , Temperature , Wearable Electronic Devices , Humans , Hydrogels/chemistry , Acrylic Resins/chemistry , Carboxymethylcellulose Sodium/chemistry , Polyvinyl Alcohol/chemistry , Tensile Strength , Robotics
18.
Int J Biol Macromol ; 276(Pt 1): 133802, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38992552

ABSTRACT

Pursuing high-performance conductive hydrogels remains a hot topic in the development of advanced flexible wearable devices. Herein, a tough, self-healing, adhesive double-network (DN) conductive hydrogel (named OSA-(Gelatin/PAM)-Ca, or O-(G/P)-Ca) was prepared by bridging gelatin and polyacrylamide networks with a functionalized polysaccharide (oxidized sodium alginate, OSA) through a Schiff base reaction. Thanks to the multiple interactions (Schiff base bonds, hydrogen bonds, and metal coordination) within the network, the prepared hydrogel showed outstanding mechanical properties (tensile strain of 2800% and stress of 630 kPa), high conductivity (0.72 S/m), repeatable adhesion, and excellent self-healing ability (83.6%/79.0% of the original tensile strain/stress after self-healing). Moreover, the hydrogel-based sensor exhibited high strain sensitivity (GF = 3.66) and a fast response time (<0.5 s), enabling it to monitor a wide range of human physiological signals. Building on its excellent compression sensitivity (GF = 0.41 kPa-1 in the range of 90-120 kPa), a three-dimensional (3D) flexible sensor array was designed to monitor the intensity of pressure and the spatial force distribution. In addition, a gel-based wearable sensor accurately classified and recognized ten types of gestures, achieving an accuracy rate above 96.33% both before and after self-healing under three machine learning models (decision tree, SVM, and KNN). This paper provides a simple method for preparing tough, self-healing conductive hydrogels as flexible multifunctional sensor devices for versatile applications in fields such as healthcare monitoring, human-computer interaction, and artificial intelligence.
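The three-model comparison at the end of the abstract follows a standard pattern; here is a minimal sketch with synthetic signals standing in for the hydrogel sensor data (feature count and train/test split are assumptions):

```python
# Hedged sketch: comparing decision tree, SVM, and KNN on ten gesture classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # 20 features per gesture sample (assumed)
y = rng.integers(0, 10, size=500)     # 10 gesture classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```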


Subject(s)
Acrylic Resins , Alginates , Electric Conductivity , Gelatin , Hydrogels , Wearable Electronic Devices , Alginates/chemistry , Acrylic Resins/chemistry , Hydrogels/chemistry , Gelatin/chemistry , Humans , Oxidation-Reduction , Adhesives/chemistry , Tensile Strength , Biosensing Techniques/methods
19.
Front Neurosci ; 18: 1306047, 2024.
Article in English | MEDLINE | ID: mdl-39050666

ABSTRACT

Surface electromyographic (sEMG) signals reflect human motor intention and can be utilized for human-machine interfaces (HMIs). Compared with sparse multi-channel (SMC) electrodes, high-density (HD) electrode grids contain many electrodes with compact inter-electrode spacing, capturing more sEMG information and offering the potential for higher myocontrol performance. However, when the HD electrode grid shifts or is damaged, gesture recognition is affected and recognition accuracy drops. To minimize the impact of electrode shift and damage, we propose an attention deep fast convolutional neural network (attention-DFCNN) model that exploits the temporal and spatial characteristics of high-density surface electromyography (HD-sEMG) signals. In contrast to previous methods, which are mostly based on temporal sEMG features, the attention-DFCNN model improves robustness and stability by combining spatial and temporal features. The performance of the proposed model was compared with classical and deep learning methods, using the dataset provided by the University Medical Center Göttingen. Seven able-bodied subjects and one amputee were involved in this work. Each subject executed nine gestures under electrode shift (10 mm) and damage (6 channels). For electrode shifts of 10 mm in four directions (inwards, onwards, upwards, downwards) on the seven able-bodied subjects, without any pre-training, the average accuracy of attention-DFCNN (0.942 ± 0.04) was significantly higher than that of LSDA (0.910 ± 0.04, p < 0.01), CNN (0.920 ± 0.05, p < 0.01), TCN (0.840 ± 0.07, p < 0.01), LSTM (0.864 ± 0.08, p < 0.01), attention-BiLSTM (0.852 ± 0.07, p < 0.01), Transformer (0.903 ± 0.07, p < 0.01), and Swin-Transformer (0.908 ± 0.09, p < 0.01). The proposed attention-DFCNN algorithm, and its way of combining the spatial and temporal features of sEMG signals, can significantly improve the recognition rate when the HD electrode grid shifts or is damaged during wear.
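The abstract does not specify the attention-DFCNN architecture, but the core idea of combining spatial and temporal HD-sEMG features with attention can be sketched. Below is a hypothetical, simplified stand-in (the layer sizes, the squeeze-and-excitation-style attention, and the 8x8 grid are all assumptions, not the paper's model): a window is treated as a time x grid-rows x grid-cols volume, and channel attention re-weights feature maps, which is one way to reduce sensitivity to shifted or missing electrodes.

```python
# Hedged sketch of a spatial-temporal attention CNN for HD-sEMG (not the paper's model).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):                   # x: (batch, channels, rows, cols)
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze -> one weight per channel
        return x * w[:, :, None, None]      # excite: re-weight feature maps

class AttentionSEMGNet(nn.Module):
    def __init__(self, t_frames=20, n_classes=9):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(t_frames, 32, 3, padding=1), nn.ReLU(),  # time as input channels
            ChannelAttention(32),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes))       # nine gestures, as in the study

    def forward(self, x):                   # x: (batch, time, grid_rows, grid_cols)
        return self.body(x)

logits = AttentionSEMGNet()(torch.randn(2, 20, 8, 8))  # assumed 8x8 electrode grid
```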

20.
Comput Biol Med ; 179: 108817, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39004049

ABSTRACT

Force myography (FMG) is gaining importance in gesture recognition because of its ability to achieve high classification accuracy without direct contact with the skin. In this study, we investigate the performance of a bracelet with only six commercial force-sensitive resistor (FSR) sensors for classifying hand gestures representing all letters and the numbers 0 to 10 in American Sign Language. For this, we introduce an optimized feature selection in combination with the Extreme Learning Machine (ELM) as a classifier, investigating three swarm-intelligence algorithms: the binary grey wolf optimizer (BGWO), the binary grasshopper optimizer (BGOA), and the binary hybrid grey wolf particle swarm optimizer (BGWOPSO), which is used as an optimization method for ELM for the first time in this study. The findings reveal that BGWOPSO, in which PSO supports the GWO optimizer by controlling its exploration and exploitation via an inertia constant to speed convergence toward the global optimum, outperformed the other investigated algorithms. In addition, the results show that optimizing ELM with BGWOPSO for feature selection efficiently improves performance, raising classification accuracy from 32% to 69.84% for 37 gestures collected from multiple volunteers using only a band with 6 FSR sensors.
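An ELM of the kind the abstract builds on has a particularly compact form: random, fixed input weights, a nonlinear hidden layer, and output weights solved in closed form by least squares. A minimal sketch follows (hidden-layer size is illustrative; the BGWO/BGOA/BGWOPSO feature-selection wrappers are not reproduced):

```python
# Hedged sketch of an Extreme Learning Machine classifier (numpy only).
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100, n_classes=37):
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ np.eye(n_classes)[y]  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

X = rng.normal(size=(300, 6))                        # 6 FSR channels, as in the band
y = rng.integers(0, 37, size=300)                    # 37 gesture classes
W, b, beta = elm_fit(X, y)
print("train accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```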


Subject(s)
Algorithms , Gestures , Humans , Machine Learning , Myography/methods , Male , Female