Results 1 - 20 of 3,711
1.
J Psycholinguist Res ; 53(4): 56, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926243

ABSTRACT

The present paper examines how native English speakers produce scopally ambiguous sentences and how they use gestures and prosody for disambiguation. As a case in point, the participants in the present study produced English negative quantifiers, which appear in two different positions, as in (1) The election of no candidate was a surprise (a: 'for those elected, none of them was a surprise'; b: 'no candidate was elected, and that was a surprise') and (2) No candidate's election was a surprise (a: 'for those elected, none of them was a surprise'; b: # 'no candidate was elected, and that was a surprise'). This allowed us to investigate the gesture production and prosodic patterns of the positional effects (i.e., the a-interpretation is available in two different positions, in 1 and 2) and the interpretation effects (i.e., two different interpretations are available in the same position in 1). We found that participants tended to launch more head shakes in the (a) interpretation despite the different positions, but more head nods/beats in the (b) interpretation. While there is no difference in the prosody of no between the (a) and (b) interpretations in (1), there are pitch and durational differences between the (a) interpretations in (1) and (2). This study points out abstract similarities in gestural movements across languages such as Catalan and Spanish (Prieto et al. in Lingua 131:136-150, 2013. 10.1016/j.lingua.2013.02.008; Tubau et al. in Linguist Rev 32(1):115-142, 2015. 10.1515/tlr-2014-0016), suggesting that meaning is crucial for gesture patterns. We emphasize that gesture patterns disambiguate ambiguous interpretations when prosody cannot do so.


Subject(s)
Gestures , Psycholinguistics , Humans , Adult , Male , Female , Speech/physiology , Language , Young Adult
2.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931542

ABSTRACT

This review explores the historical and current significance of gestures as a universal form of communication, with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day, where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static and dynamic, and grades their detection difficulty. This paper also reviews the haptic devices used in VR and their advantages and challenges. It provides an overview of the process used in hand gesture acquisition, from inputs and pre-processing to pose detection, for both static and dynamic gestures.


Subject(s)
Gestures , Hand , Virtual Reality , Humans , Hand/physiology , Algorithms , User-Computer Interface , Artificial Intelligence
3.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving the accuracy of gesture recognition by increasing the complexity of network models. However, training a complex model necessitates a significant amount of data, thereby escalating both user burden and computational costs. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across different users, conventional machine learning approaches reliant on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to solve the problems of large computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis, and the Pearson correlation coefficient (MPP). This method can select the optimal subset of features for a specific user and, combined with an SVM classifier, accurately and efficiently recognize the user's gesture movements. To validate the effectiveness of this method, we designed an experiment including five gesture actions. The experimental results show that, compared to the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected feature subset as the input to any of the classifiers. This study provides an effective guarantee for user-specific fine hand movement decoding based on sEMG signals.
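The abstract gives no implementation details, but the MPP idea can be sketched. Below is a minimal, hypothetical Python illustration of combining mutual information, PCA loadings, and Pearson-correlation redundancy to rank features for an SVM; the scoring weights, normalization, and all names are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mpp_scores(X, y):
    """Hypothetical MPP-style ranking: reward relevance, penalize redundancy."""
    mi = mutual_info_classif(X, y)                           # relevance to gesture labels
    load = np.abs(PCA().fit(X).components_[:3]).sum(axis=0)  # weight in leading PCs
    corr = np.abs(np.corrcoef(X, rowvar=False))
    red = (corr.sum(axis=0) - 1) / (X.shape[1] - 1)          # mean |r| with other features
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    return norm(mi) + norm(load) - norm(red)                 # equal weights assumed

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                               # 200 windows x 12 sEMG features (toy)
y = rng.integers(0, 5, size=200)                             # 5 gesture classes

keep = np.argsort(mpp_scores(X, y))[-6:]                     # keep the 6 best-scoring features
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X[:, keep], y)
print("selected feature indices:", keep)
```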


Subject(s)
Electromyography , Forearm , Gestures , Hand , Pattern Recognition, Automated , Humans , Electromyography/methods , Hand/physiology , Forearm/physiology , Pattern Recognition, Automated/methods , Male , Adult , Principal Component Analysis , Female , Algorithms , Movement/physiology , Young Adult , Support Vector Machine , Machine Learning
4.
J Robot Surg ; 18(1): 245, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38847926

ABSTRACT

Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. 78 AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases): a less experienced group (< 475 cases) and a more experienced group (≥ 475 cases). Videos were decoded with gestures and assessed using the Dissection Assessment for Robotic Technique (DART). Statistical findings revealed that more experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from −0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery compared to those with ongoing incontinence (p < 0.04). Gestures performed during AD vary based on surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.
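As an illustration of the kind of correlation analysis reported here (per-case gesture metrics against DART scores), a small Python sketch with toy data follows; it is not the study's analysis code, and all values are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_cases = 78
gesture_counts = rng.poisson(lam=15, size=n_cases)       # e.g. retraction gestures per case (toy)
dart_total = rng.normal(loc=25, scale=5, size=n_cases)   # total DART score per case (toy)

# rank correlation between a gesture metric and technical skill score
rho, p = spearmanr(gesture_counts, dart_total)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```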


Subject(s)
Clinical Competence , Dissection , Prostatectomy , Robotic Surgical Procedures , Prostatectomy/methods , Humans , Robotic Surgical Procedures/methods , Male , Dissection/methods , Gestures , Prostatic Neoplasms/surgery , Surgeons
5.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has prevailed recently in the field of human-computer interaction for controlling intelligent prosthetics. Currently, machine learning and deep learning are the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains a substantial amount of work. Existing deep learning methods utilize complex neural network architectures to achieve higher accuracy, but can suffer from overfitting, insufficient adaptability, and low recognition accuracy. To address these issues, a novel lightweight model named the dual-stream LSTM feature fusion classifier is proposed, based on the concatenation of five time-domain features of EMG signals and raw data, both of which are processed with one-dimensional convolutional neural networks and LSTM layers to carry out the classification. The proposed method can effectively capture global features of EMG signals using a simple architecture, which means less computational cost. An experiment was conducted on the public DB1 dataset with 52 gestures, in which each of the 27 subjects repeats every gesture 10 times. The accuracy rate achieved by the model is 89.66%, which is comparable to that achieved by more complex deep learning neural networks, and the inference time for each gesture is 87.6 ms, which means it can also be applied in a real-time control system. The proposed model is further validated in a subject-wise experiment on 10 of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. These results reflect the model's ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal, using an appropriately efficient, lightweight network to enhance the recognition results.
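A rough PyTorch sketch of the dual-stream idea described here (raw sEMG in one stream, time-domain features in the other, each through Conv1d + LSTM, then concatenated) is given below; layer sizes, dimensions, and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualStreamLSTM(nn.Module):
    def __init__(self, n_channels=10, n_features=5, n_classes=52):
        super().__init__()
        self.raw_cnn = nn.Conv1d(n_channels, 32, kernel_size=5, padding=2)
        self.raw_lstm = nn.LSTM(32, 64, batch_first=True)
        self.feat_cnn = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.feat_lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, raw, feats):
        # raw: (batch, channels, time); feats: (batch, n_features, time)
        r = torch.relu(self.raw_cnn(raw)).transpose(1, 2)    # -> (B, T, 32)
        r, _ = self.raw_lstm(r)
        f = torch.relu(self.feat_cnn(feats)).transpose(1, 2)
        f, _ = self.feat_lstm(f)
        fused = torch.cat([r[:, -1], f[:, -1]], dim=1)       # last step of each stream
        return self.head(fused)

model = DualStreamLSTM()
logits = model(torch.randn(8, 10, 200), torch.randn(8, 5, 200))
print(logits.shape)  # torch.Size([8, 52])
```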


Subject(s)
Deep Learning , Electromyography , Gestures , Neural Networks, Computer , Electromyography/methods , Humans , Signal Processing, Computer-Assisted , Pattern Recognition, Automated/methods , Algorithms , Machine Learning , Hand/physiology , Memory, Short-Term/physiology
6.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, comprising 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. Results on the benchmark datasets revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the Recursive Feature Elimination (RFE) method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigation showed that selecting 65 and 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
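For readers who want to reproduce the flavor of this comparison, a hedged scikit-learn sketch of two of the named methods (RFE and mutual information) on synthetic data follows; the classifier, dataset, and parameters are placeholders, not the study's setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# toy stand-in for 37 features x 8 channels = 296 columns
X, y = make_classification(n_samples=300, n_features=296,
                           n_informative=40, random_state=0)
base = LogisticRegression(max_iter=1000)

rfe = RFE(base, n_features_to_select=65, step=10).fit(X, y)   # wrapper method
mi = SelectKBest(mutual_info_classif, k=200).fit(X, y)        # filter method

for name, sel in [("RFE, 65 features", rfe), ("MI, 200 features", mi)]:
    acc = cross_val_score(base, sel.transform(X), y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```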


Subject(s)
Electromyography , Gestures , Hand , Humans , Electromyography/methods , Hand/physiology , Algorithms , Male , Adult , Female , Signal Processing, Computer-Assisted
7.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894473

ABSTRACT

Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for some languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. We propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to extract spatial and temporal characteristics and handle sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created an ArSL dataset of 20 different words: 4000 images for 10 static gesture words and 500 videos for 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. This paper thus represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
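A minimal PyTorch sketch of such a hybrid is shown below: a small CNN trunk classifies static gesture images, and the same trunk feeds per-frame features to an LSTM for dynamic gesture videos. All dimensions and class counts here are illustrative guesses, not the authors' model.

```python
import torch
import torch.nn as nn

class CNNTrunk(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())       # -> (B, 32)

    def forward(self, x):
        return self.net(x)

class HybridArSL(nn.Module):
    def __init__(self, n_static=10, n_dynamic=10):
        super().__init__()
        self.trunk = CNNTrunk()
        self.static_head = nn.Linear(32, n_static)       # static words from images
        self.lstm = nn.LSTM(32, 64, batch_first=True)    # dynamic words from videos
        self.dynamic_head = nn.Linear(64, n_dynamic)

    def classify_image(self, img):                       # img: (B, 3, H, W)
        return self.static_head(self.trunk(img))

    def classify_video(self, vid):                       # vid: (B, T, 3, H, W)
        b, t = vid.shape[:2]
        feats = self.trunk(vid.flatten(0, 1)).reshape(b, t, 32)
        out, _ = self.lstm(feats)
        return self.dynamic_head(out[:, -1])

m = HybridArSL()
print(m.classify_image(torch.randn(2, 3, 64, 64)).shape)     # (2, 10)
print(m.classify_video(torch.randn(2, 8, 3, 64, 64)).shape)  # (2, 10)
```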


Subject(s)
Deep Learning , Neural Networks, Computer , Sign Language , Humans , Saudi Arabia , Language , Gestures
8.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising, potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers, slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy in the other classifiers. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke as compared with conventional transfer learning, neural networks, and traditional machine learning approaches. In addition, K-Best feature selection and increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry with registration number ChiCTR1800017568 on 2018/08/04.
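The core prototypical-network mechanism the authors build on can be sketched compactly: class prototypes are mean embeddings of a few support examples from the new user, and queries are labeled by the nearest prototype. The embedding network and feature dimension below are untrained placeholders, not the study's model.

```python
import torch
import torch.nn as nn

# placeholder embedding network; the study would train this on source subjects
embed = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 32))

def predict(support_x, support_y, query_x, n_classes=7):
    # one prototype per gesture class = mean of its support embeddings
    z_s, z_q = embed(support_x), embed(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_classes)])
    d = torch.cdist(z_q, protos)          # Euclidean distance to each prototype
    return d.argmin(dim=1)                # nearest prototype wins

support_x = torch.randn(7, 48)            # one shot per class from the new user (toy)
support_y = torch.arange(7)               # the seven gestures
query_x = torch.randn(20, 48)             # unlabeled windows to classify
print(predict(support_x, support_y, query_x))
```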


Subject(s)
Gestures , Hand , Neural Networks, Computer , Stroke Rehabilitation , Humans , Stroke Rehabilitation/methods , Stroke Rehabilitation/instrumentation , Hand/physiopathology , Male , Female , Middle Aged , Stroke/complications , Stroke/physiopathology , Aged , Machine Learning , Transfer, Psychology/physiology , Adult , Electromyography , Wearable Electronic Devices
9.
PLoS One ; 19(6): e0288670, 2024.
Article in English | MEDLINE | ID: mdl-38870182

ABSTRACT

Many viruses and diseases spread from one person to another through the respiratory system. COVID-19 served as an example of how crucial it is to track and reduce contacts to stop its spread. There is a clear gap in automatic methods that can detect hand-to-face contact in complex urban scenes or indoors. In this paper, we introduce a computer vision framework, called FaceTouch, based on deep learning. It comprises deep sub-models to detect humans and analyse their actions. FaceTouch seeks to detect hand-to-face touches in the wild, such as in video chats, bus footage, or CCTV feeds. Despite partial occlusion of faces, the introduced system learns to detect face touches from the RGB representation of a given scene by utilising representations of body gestures such as arm movement. This has been demonstrated to be useful in complex urban scenarios, beyond simply identifying hand movement and its closeness to faces. Relying on Supervised Contrastive Learning, the introduced model is trained on our collected dataset, given the absence of other benchmark datasets. The framework shows strong validation on unseen datasets, which opens the door for potential deployment.
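FaceTouch's training relies on supervised contrastive learning; a generic PyTorch sketch of such a loss (in the style of Khosla et al., 2020) follows. It is a reference illustration, not the authors' training code, and the batch data are toys.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """z: (N, D) embeddings; labels: (N,). Returns a scalar loss."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau                                  # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))      # never contrast with self
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)      # avoid -inf * 0 below
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability over each anchor's positive pairs
    per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

z = torch.randn(16, 128)                                 # toy embeddings
labels = torch.randint(0, 2, (16,))                      # touch vs. no-touch
print(supcon_loss(z, labels))
```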


Subject(s)
COVID-19 , Humans , SARS-CoV-2/isolation & purification , Touch/physiology , Deep Learning , Hand/physiology , Contact Tracing/methods , Supervised Machine Learning , Gestures , Face
10.
Nat Commun ; 15(1): 4791, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839754

ABSTRACT

The planum temporale (PT), a key language area, is specialized in the left hemisphere in prelinguistic infants and considered a marker of the pre-wired language-ready brain. However, studies have reported a similar structural PT left-asymmetry not only in various adult non-human primates, but also in newborn baboons. Its shared functional links with language are not fully understood. Here we demonstrate, using previously obtained MRI data, that early detection of PT left-asymmetry among 27 newborn baboons (Papio anubis, age range of 4 days to 2 months) predicts the future development of right-hand preference for communicative gestures but not for non-communicative actions. Specifically, only newborns with a larger left-than-right PT were more likely to develop right-handed communication once juvenile, a contralateral brain-gesture link that is maintained in a group of 70 mature baboons. This finding suggests that early PT asymmetry may be a common inherited prewiring of the primate brain for the ontogeny of ancient lateralised properties shared between monkey gesture and human language.


Subject(s)
Animals, Newborn , Functional Laterality , Gestures , Magnetic Resonance Imaging , Animals , Functional Laterality/physiology , Female , Male , Papio anubis , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Language
11.
Sci Rep ; 14(1): 14873, 2024 06 27.
Article in English | MEDLINE | ID: mdl-38937537

ABSTRACT

Smart gloves are in high demand for entertainment, manufacturing, and rehabilitation. However, designing smart gloves has been complex and costly due to trial and error. We propose an open simulation platform for designing smart gloves, including optimal sensor placement and deep learning models for gesture recognition, with reduced costs and manual effort. Our pipeline starts with 3D hand pose extraction from videos and extends to the refinement and conversion of the poses into hand joint angles based on inverse kinematics, the sensor placement optimization based on hand joint analysis, and the training of deep learning models using simulated sensor data. In comparison to the existing platforms that always require precise motion data as input, our platform takes monocular videos, which can be captured with widely available smartphones or web cameras, as input and integrates novel approaches to minimize the impact of the errors induced by imprecise motion extraction from videos. Moreover, our platform enables more efficient sensor placement selection. We demonstrate how the pipeline works and how it delivers a sensible design for smart gloves in a real-life case study. We also evaluate the performance of each building block and its impact on the reliability of the generated design.
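One concrete step in such a pipeline, converting estimated 3D hand keypoints into a joint flexion angle, can be sketched as follows; the keypoint layout and values are hypothetical, and real systems use full-hand inverse kinematics.

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle at p_joint between the two adjacent bone segments, in degrees."""
    u = p_proximal - p_joint
    v = p_distal - p_joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. an index-finger PIP angle from MCP, PIP, DIP keypoints (toy coordinates)
mcp, pip, dip = np.array([0, 0, 0.0]), np.array([0, 4, 0.0]), np.array([0, 7, 2.0])
print(f"PIP joint angle: {joint_angle(mcp, pip, dip):.1f} deg")
```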


Subject(s)
Gestures , Humans , Hand/physiology , Deep Learning , Biomechanical Phenomena , Computer Simulation , Equipment Design
12.
Article in English | MEDLINE | ID: mdl-38869995

ABSTRACT

Gesture recognition is crucial for enhancing human-computer interaction and is particularly pivotal in rehabilitation contexts, aiding individuals recovering from physical impairments and significantly improving their mobility and interactive capabilities. However, current wearable hand gesture recognition approaches are often limited in detection performance, wearability, and generalization. We thus introduce EchoGest, a novel hand gesture recognition system based on soft, stretchable, transparent artificial skin with integrated ultrasonic waveguides. Our presented system is the first to use soft ultrasonic waveguides for hand gesture recognition. Ecoflex™ 00-31 and Ecoflex™ 00-45 Near Clear™ silicone elastomers were employed to fabricate the artificial skin and ultrasonic waveguides, while 0.1 mm diameter silver-plated copper wires connected the transducers in the waveguides to the electrical system. The wires are enclosed within an additional elastomer layer, achieving a sensing skin with a total thickness of around 500 µm. Ten participants wore the EchoGest system and performed static hand gestures from two gesture sets: 8 daily life gestures and 10 American Sign Language (ASL) digits 0-9. Leave-One-Subject-Out Cross-Validation analysis demonstrated accuracies of 91.13% for daily life gestures and 88.5% for ASL gestures. The EchoGest system has significant potential in rehabilitation, particularly for tracking and evaluating hand mobility, which could substantially reduce the workload of therapists in both clinical and home-based settings. Integrating this technology could revolutionize hand gesture recognition applications, from real-time sign language translation to innovative rehabilitation techniques.


Subject(s)
Gestures , Hand , Pattern Recognition, Automated , Wearable Electronic Devices , Humans , Female , Hand/physiology , Adult , Male , Pattern Recognition, Automated/methods , Young Adult , Ultrasonics , Algorithms , Silicone Elastomers , Skin , Reproducibility of Results
13.
Math Biosci Eng ; 21(4): 5712-5734, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872555

ABSTRACT

This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance in the analysis of Log-Mel spectrogram images derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance extends to comparisons with transfer learning models for image classification, further confirming its efficacy. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.
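The input representation can be sketched with librosa (an assumed tooling choice; the paper may compute it differently): each sEMG channel of a window is converted into a log-Mel spectrogram image for the CNN. The sampling rate and STFT settings below are assumptions.

```python
import numpy as np
import librosa

sr = 2000                                    # assumed sEMG sampling rate (Hz)
emg = np.random.randn(12, 2 * sr)            # 12 channels, 2 s window (toy data)

mels = []
for ch in emg:
    m = librosa.feature.melspectrogram(y=ch.astype(np.float32), sr=sr,
                                       n_fft=256, hop_length=64,
                                       n_mels=32, fmax=sr // 2)
    mels.append(librosa.power_to_db(m))      # log-Mel image per channel
log_mel = np.stack(mels)                     # (channels, mel bins, frames)
print(log_mel.shape)                         # CNN input, one "image" per channel
```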


Subject(s)
Algorithms , Amputees , Electromyography , Gestures , Neural Networks, Computer , Signal Processing, Computer-Assisted , Upper Extremity , Humans , Electromyography/methods , Upper Extremity/physiology , Male , Adult , Female , Young Adult , Middle Aged , Reproducibility of Results
14.
J Neural Eng ; 21(3)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38806038

ABSTRACT

Objective. Decoding gestures from the upper limb using noninvasive surface electromyogram (sEMG) signals is of keen interest for the rehabilitation of amputees, artificial supernumerary limb augmentation, gestural control of computers, and virtual/augmented realities. We show that sEMG signals recorded across an array of sensor electrodes in multiple spatial locations around the forearm evince a rich geometric pattern of global motor unit (MU) activity that can be leveraged to distinguish different hand gestures. Approach. We demonstrate a simple technique to analyze spatial patterns of muscle MU activity within a temporal window and show that distinct gestures can be classified in both supervised and unsupervised manners. Specifically, we construct symmetric positive definite covariance matrices to represent the spatial distribution of MU activity in a time window of interest, calculated as the pairwise covariance of electrical signals measured across different electrodes. Main results. This allows us to understand and manipulate multivariate sEMG time series on a more natural subspace: the Riemannian manifold. Furthermore, it directly addresses signal variability across individuals and sessions, which remains a major challenge in the field. sEMG signals measured at a single electrode lack contextual information, such as how various anatomical and physiological factors influence the signals and how their combined effect alters the evident interaction among neighboring muscles. Significance. As we show here, analyzing spatial patterns using covariance matrices on Riemannian manifolds allows us to robustly model complex interactions across spatially distributed MUs and provides a flexible and transparent framework to quantify differences in sEMG signals across individuals. The proposed method is novel in the study of sEMG signals, and its performance exceeds current benchmarks while being computationally efficient.
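A minimal sketch of the covariance-on-manifold pipeline is possible with pyriemann (an assumed, commonly used library; the authors' own implementation may differ): estimate an SPD spatial covariance matrix per sEMG window, then classify by Riemannian minimum distance to mean.

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8, 400))        # 60 windows, 8 electrodes, 400 samples (toy)
y = rng.integers(0, 3, size=60)              # 3 gesture labels (toy)

covs = Covariances(estimator="oas").transform(X)   # SPD matrices, shape (60, 8, 8)
clf = MDM(metric="riemann").fit(covs, y)           # minimum distance to Riemannian mean
print(clf.predict(covs[:5]))
```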


Subject(s)
Electromyography , Gestures , Hand , Muscle, Skeletal , Humans , Electromyography/methods , Hand/physiology , Male , Female , Adult , Muscle, Skeletal/physiology , Young Adult , Algorithms
15.
Autism Res ; 17(5): 989-1000, 2024 May.
Article in English | MEDLINE | ID: mdl-38690644

ABSTRACT

Prior work examined how minimally verbal (MV) children with autism used gestural communication during social interactions. However, interactions are exchanges between social partners. Examining parent-child social interactions is critically important given the influence of parent responsivity on children's communicative development. Specifically, parent responses that are semantically contingent on the child's communication play an important role in further shaping children's language learning. This study examines whether MV autistic children's (N = 47; 48-95 months; 10 females) modality and form of communication are associated with parent responsivity during an in-home parent-child interaction (PCI). The PCI was collected using natural language sampling methods and coded for child modality and form of communication and parent responses. Findings from Kruskal-Wallis H tests revealed no significant difference in parent semantically contingent responses based on child communication modality (spoken language, gesture, gesture-speech combinations, and AAC) or form of communication (precise vs. imprecise). The findings highlight the importance of examining multiple modalities and forms of communication in MV children with autism to obtain a more comprehensive understanding of their communication abilities, and underscore the value of interactionist models of communication for examining how children's input shapes parent responses and, in turn, language learning experiences.
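For illustration, the reported analysis maps onto scipy's Kruskal-Wallis H test as sketched below; the group values are hypothetical toys, not the study's data.

```python
from scipy.stats import kruskal

# proportion of semantically contingent parent responses per dyad (toy values)
spoken   = [0.62, 0.55, 0.70, 0.48, 0.66]
gesture  = [0.58, 0.61, 0.52, 0.67, 0.49]
combined = [0.64, 0.57, 0.60, 0.53, 0.68]
aac      = [0.50, 0.63, 0.59, 0.56, 0.61]

h, p = kruskal(spoken, gesture, combined, aac)
print(f"H = {h:.2f}, p = {p:.3f}")
```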


Subject(s)
Autistic Disorder , Communication , Parent-Child Relations , Humans , Female , Male , Child , Child, Preschool , Autistic Disorder/psychology , Gestures , Parents , Language Development , Speech
16.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722304

ABSTRACT

Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e., factors that hinder the real-world robustness of myoelectric control that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) a newly identified confound faced by discrete systems, gesture elicitation speed. Results from four different discrete myoelectric control architectures: (1) Majority Vote LDA, (2) Dynamic Time Warping, (3) an LSTM network trained with Cross Entropy, and (4) an LSTM network trained with Contrastive Learning, show that classification accuracy is significantly degraded (p < 0.05) as a result of each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
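The simplest of the four architectures, Majority Vote LDA, can be sketched in a few lines: frame-wise LDA predictions within a gesture window are reduced to a single event label by majority vote. The data, feature dimension, and window length below are placeholders, not the study's setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
train_X = rng.normal(size=(500, 16))         # per-frame sEMG features (toy)
train_y = rng.integers(0, 4, size=500)       # 4 discrete gestures (toy)

lda = LinearDiscriminantAnalysis().fit(train_X, train_y)

gesture_frames = rng.normal(size=(30, 16))   # 30 frames of one elicited gesture
votes = lda.predict(gesture_frames)          # one prediction per frame
event = np.bincount(votes, minlength=4).argmax()   # majority vote -> event label
print("decoded gesture event:", event)
```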


Subject(s)
Electromyography , Gestures , Pattern Recognition, Automated , Humans , Electromyography/methods , Male , Pattern Recognition, Automated/methods , Female , Adult , Young Adult , Artificial Limbs
17.
Appl Ergon ; 119: 104306, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38714102

ABSTRACT

Industry 5.0 promotes collaborative robots (cobots). This research studies the impacts of cobot collaboration using an experimental setup. 120 participants performed a simple and a complex assembly task; 50% collaborated with another human (H/H) and 50% with a cobot (H/C). The workload and the acceptability of the cobotic collaboration were measured. Working with a cobot decreases the effect of task complexity on the human workload and on the output quality. However, it increases completion time and the number of gestures (while decreasing their frequency). H/C pairs have a higher chance of success, but they take more time and more gestures to complete the task. The results of this research could help developers and stakeholders understand the impacts of implementing a cobot in production chains.


Subject(s)
Cooperative Behavior , Gestures , Robotics , Task Performance and Analysis , Workload , Humans , Workload/psychology , Male , Female , Adult , Young Adult , Man-Machine Systems , Time Factors
18.
Article in English | MEDLINE | ID: mdl-38771682

ABSTRACT

Gesture recognition has emerged as a significant research domain in computer vision and human-computer interaction. One of the key challenges in gesture recognition is how to select the most useful channels that can effectively represent gesture movements. In this study, we developed a channel selection algorithm that determines the number and placement of sensors critical to gesture classification. To validate this algorithm, we constructed a Force Myography (FMG)-based signal acquisition system. The algorithm considers each sensor as a distinct channel, with the most effective channel combinations and recognition accuracy determined by assessing the correlation between each channel and the target gesture, as well as the redundant correlation between different channels. The database was created by collecting experimental data from 10 healthy individuals who wore 16 sensors to perform 13 unique hand gestures. The results indicate that the average number of channels across the 10 participants was 3, corresponding to a 75% decrease in the initial channel count, with an average recognition accuracy of 94.46%. This outperforms four widely adopted feature selection algorithms, including Relief-F, mRMR, CFS, and ILFS. Moreover, we established a universal model for the position of gesture measurement points and verified it with an additional five participants, resulting in an average recognition accuracy of 96.3%. This study provides a sound basis for identifying the optimal and minimum number and location of channels on the forearm and designing specialized arm rings with unique shapes.
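The relevance/redundancy idea can be sketched as a greedy selection: keep channels strongly correlated with the gesture label and drop those highly correlated with already-selected channels. The correlation measure and threshold below are assumptions, not the authors' exact algorithm.

```python
import numpy as np

def select_channels(X, y, redundancy_thr=0.8):
    """X: (n_samples, n_channels) per-channel features; y: gesture labels."""
    relevance = np.array([abs(np.corrcoef(X[:, c], y)[0, 1])
                          for c in range(X.shape[1])])
    order = np.argsort(relevance)[::-1]      # most relevant channels first
    chosen = []
    for c in order:
        redundant = any(abs(np.corrcoef(X[:, c], X[:, k])[0, 1]) > redundancy_thr
                        for k in chosen)
        if not redundant:                    # keep only non-redundant channels
            chosen.append(c)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 16))               # 16 FMG sensors (toy data)
y = rng.integers(0, 13, size=260)            # 13 gestures
print("selected channels:", select_channels(X, y))
```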


Subject(s)
Algorithms , Gestures , Pattern Recognition, Automated , Humans , Male , Female , Adult , Pattern Recognition, Automated/methods , Young Adult , Myography/methods , Hand/physiology , Healthy Volunteers , Reproducibility of Results
19.
J Med Internet Res ; 26: e58390, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38742989

ABSTRACT

Posttraumatic stress disorder (PTSD) is a significant public health concern, with only a third of patients recovering within a year of treatment. While PTSD often disrupts the sense of body ownership and sense of agency (SA), attention to the SA in trauma has been lacking. This perspective paper explores the loss of the SA in PTSD and its relevance in the development of symptoms. Trauma is viewed as a breakdown of the SA, related to a freeze response, with peritraumatic dissociation increasing the risk of PTSD. Drawing from embodied cognition, we propose an enactive perspective of PTSD, suggesting therapies that restore the SA through direct engagement with the body and environment. We discuss the potential of agency-based therapies and innovative technologies such as gesture sonification, which translates body movements into sounds to enhance the SA. Gesture sonification offers a screen-free, noninvasive approach that could complement existing trauma-focused therapies. We emphasize the need for interdisciplinary collaboration and clinical research to further explore these approaches in preventing and treating PTSD.


Subject(s)
Stress Disorders, Post-Traumatic , Humans , Stress Disorders, Post-Traumatic/therapy , Stress Disorders, Post-Traumatic/psychology , Gestures
20.
J Acoust Soc Am ; 155(5): 3521-3536, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809098

ABSTRACT

This electromagnetic articulography study explores the kinematic profile of Intonational Phrase (IP) boundaries in Seoul Korean. Recent findings suggest that the scope of phrase-final lengthening is conditioned by word- and/or phrase-level prominence. However, evidence comes mainly from head-prominence languages, which conflate positions of word prosody with positions of phrasal prominence. Here, we examine phrase-final lengthening in Seoul Korean, an edge-prominence language with no word prosody, with respect to focus location as an index of phrase-level prominence and Accentual Phrase (AP) length as an index of word demarcation. Results show that phrase-final lengthening extends over the phrase-final syllable. The effect is greater the further away focus occurs. It also interacts with the domains of the AP and the prosodic word: lengthening is greater in smaller APs, whereas shortening is observed in the initial gesture of the phrase-final word. Additional analyses of kinematic displacement and peak velocity revealed that Korean phrase-final gestures bear the kinematic profile of IP boundaries concurrently with what is typically considered prominence marking. Based on these results, a gestural coordination account is proposed in which boundary-related events interact systematically with phrase-level prominence as well as lower prosodic levels, and how this proposal relates to the findings in head-prominence languages is discussed.


Subject(s)
Phonetics , Speech Acoustics , Humans , Male , Female , Young Adult , Biomechanical Phenomena , Adult , Language , Gestures , Speech Production Measurement , Republic of Korea , Voice Quality , Time Factors