Results 1 - 20 of 324
1.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible and wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit along with the externally connected inductors and capacitors. Changes in the elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing and random forest MLA used to recognize 10 different elbow gestures are described. Rigorous evaluation on 8 subjects and data augmentation, which expanded the dataset to 1270 trials per gesture, enabled the system to achieve remarkable accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. The test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
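A minimal sketch of the evaluation protocol described above: a random forest classifier scored with both 5-fold and leave-one-subject-out cross-validation (scikit-learn assumed). The feature matrix, gesture labels, and subject IDs below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1270, 16))           # placeholder coil-inductance features per trial
y = rng.integers(0, 10, size=1270)        # 10 elbow gestures
subjects = rng.integers(0, 8, size=1270)  # 8 subjects

clf = RandomForestClassifier(n_estimators=200, random_state=0)
kfold_acc = cross_val_score(clf, X, y, cv=5).mean()              # 5-fold CV
loso_acc = cross_val_score(clf, X, y, groups=subjects,
                           cv=LeaveOneGroupOut()).mean()         # leave-one-subject-out CV
print(f"5-fold: {kfold_acc:.3f}  LOSO: {loso_acc:.3f}")
```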


Subjects
Algorithms; Elbow; Gestures; Machine Learning; Humans; Elbow/physiology; Wearable Electronic Devices; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Male; Adult; Female
2.
Comput Biol Med ; 179: 108817, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39004049

ABSTRACT

Force myography (FMG) is increasingly gaining importance in gesture recognition because of its ability to achieve high classification accuracy without requiring direct contact with the skin. In this study, we investigate the performance of a bracelet with only six commercial force-sensitive resistor (FSR) sensors for classifying hand gestures representing all letters and the numbers from 0 to 10 in American Sign Language. For this, we introduce an optimized feature selection in combination with the Extreme Learning Machine (ELM) as a classifier, investigating three swarm intelligence algorithms: the binary grey wolf optimizer (BGWO), the binary grasshopper optimizer (BGOA), and the binary hybrid grey wolf particle swarm optimizer (BGWOPSO), the last of which is used as an optimization method for ELM for the first time in this study. The findings reveal that BGWOPSO, in which PSO supports the GWO optimizer by controlling its exploration and exploitation with an inertia constant to improve convergence toward the global optimum, outperformed the other investigated algorithms. In addition, the results show that optimizing ELM with BGWOPSO for feature selection can efficiently improve the performance of ELM, enhancing the classification accuracy from 32% to 69.84% for classifying 37 gestures collected from multiple volunteers using only a band with six FSR sensors.
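A hedged sketch of the two ingredients named above: a minimal Extreme Learning Machine (random hidden layer, least-squares readout) and the kind of binary feature-mask fitness a BGWOPSO-style optimizer would evaluate. The swarm loop itself is omitted, and the data, sizes, and fitness definition are assumptions.

```python
import numpy as np

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden=200, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_tr.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H_tr = np.tanh(X_tr @ W + b)                     # hidden-layer activations
    T = np.eye(int(y_tr.max()) + 1)[y_tr]            # one-hot targets
    beta = np.linalg.pinv(H_tr) @ T                  # least-squares output weights
    return (np.tanh(X_te @ W + b) @ beta).argmax(axis=1)

def mask_fitness(mask, X_tr, y_tr, X_te, y_te):
    # fitness of one binary feature mask: ELM accuracy on the selected columns
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return (elm_fit_predict(X_tr[:, cols], y_tr, X_te[:, cols]) == y_te).mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 24))       # placeholder FSR features (6 sensors x 4 statistics)
y = rng.integers(0, 37, size=400)    # 37 gesture classes
mask = rng.integers(0, 2, size=24)   # one candidate solution from the swarm
print(mask_fitness(mask, X[:300], y[:300], X[300:], y[300:]))
```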

3.
Int J Biol Macromol ; 276(Pt 1): 133802, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38992552

ABSTRACT

Pursuing high-performance conductive hydrogels is still a hot topic in the development of advanced flexible wearable devices. Herein, a tough, self-healing, adhesive double-network (DN) conductive hydrogel (named OSA-(Gelatin/PAM)-Ca, O-(G/P)-Ca) was prepared by bridging gelatin and polyacrylamide networks with a functionalized polysaccharide (oxidized sodium alginate, OSA) through a Schiff base reaction. Thanks to the presence of multiple interactions (Schiff base bonds, hydrogen bonds, and metal coordination) within the network, the prepared hydrogel showed outstanding mechanical properties (tensile strain of 2800% and stress of 630 kPa), high conductivity (0.72 S/m), repeatable adhesion performance, and excellent self-healing ability (83.6%/79.0% of the original tensile strain/stress after self-healing). Moreover, the hydrogel-based sensor exhibited high strain sensitivity (GF = 3.66) and a fast response time (<0.5 s), which can be used to monitor a wide range of human physiological signals. Building on its excellent compression sensitivity (GF = 0.41 kPa-1 in the range of 90-120 kPa), a three-dimensional (3D) flexible sensor array was designed to monitor the intensity of pressure and its spatial distribution. In addition, a gel-based wearable sensor accurately classified and recognized ten types of gestures, achieving an accuracy rate of >96.33% both before and after self-healing under three machine learning models (decision tree, SVM, and KNN). This paper provides a simple method to prepare tough, self-healing conductive hydrogels as flexible multifunctional sensor devices for versatile applications in fields such as healthcare monitoring, human-computer interaction, and artificial intelligence.
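A short sketch of the gesture-classification step reported for the hydrogel sensor: ten gesture classes evaluated under the three models named in the abstract (decision tree, SVM, KNN). The strain features below are synthetic placeholders for features extracted from the sensor signal.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 8))        # placeholder features from the strain-signal waveforms
y = rng.integers(0, 10, size=600)    # ten gesture types
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

for name, model in [("decision tree", DecisionTreeClassifier()),
                    ("SVM", SVC()),
                    ("KNN", KNeighborsClassifier())]:
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```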

5.
Adv Sci (Weinh) ; : e2402175, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981031

ABSTRACT

A self-powered mechanoreceptor array is demonstrated using four mechanoreceptor cells for the recognition of dynamic touch gestures. Each cell consists of a triboelectric nanogenerator (TENG) for touch sensing and a bi-stable resistor (biristor) for spike encoding. It produces informative spike signals by sensing the force of an external touch and encoding the force into a number of spikes. An array of the mechanoreceptor cells is utilized to monitor various touch gestures, and it successfully generated spike signals corresponding to all the gestures. To validate the practicality of the mechanoreceptor array, a spiking neural network (SNN), highly attractive for its low power consumption compared to the conventional von Neumann architecture, is used for the identification of touch gestures. The measured spiking signals are used as inputs for the SNN simulations. Consequently, touch gestures are classified with a high accuracy rate of 92.5%. The proposed mechanoreceptor array emerges as a promising candidate for a building block of tactile in-sensor computing in the era of the Internet of Things (IoT), owing to the low cost, high manufacturability, and power-supply-free operation of the TENG, coupled with the intrinsic high throughput of the Si-based biristor employing complementary metal-oxide-semiconductor (CMOS) technology.
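An illustrative sketch of the force-to-spike-count encoding idea: each mechanoreceptor cell maps touch force to a number of spikes, and a gesture becomes a per-cell spike-count vector that a spiking neural network (omitted here) would classify. The gain and saturation values are assumptions, not measured device characteristics.

```python
def force_to_spike_count(force_newton, gain=20.0, max_spikes=50):
    # assumed monotonic mapping from touch force to spike count for one cell
    return int(min(max_spikes, round(gain * force_newton)))

touch_forces = [0.4, 1.2, 0.0, 0.8]                  # forces on a 2x2 cell array (N)
spike_vector = [force_to_spike_count(f) for f in touch_forces]
print(spike_vector)                                  # [8, 24, 0, 16]
```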

6.
J Electr Bioimpedance ; 15(1): 63-74, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38863504

ABSTRACT

Gesture recognition is a crucial aspect in the advancement of virtual reality, healthcare, and human-computer interaction, and requires innovative methodologies to meet the increasing demands for precision. This paper presents a novel approach that combines Impedance Signal Spectrum Analysis (ISSA) with machine learning to improve gesture recognition precision. A diverse dataset was collected that included participants from various demographic backgrounds (five individuals), each executing a range of predefined gestures. The predefined gestures were designed to encompass a broad spectrum of hand movements, including intricate and subtle variations, to challenge the robustness of the proposed methodology. Machine learning models using the K-Nearest Neighbors (KNN), Gradient Boosting Machine (GBM), Naive Bayes (NB), Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) algorithms demonstrated notable precision in performance evaluations. The individual accuracy values for each algorithm are as follows: KNN, 86%; GBM, 86%; NB, 84%; LR, 89%; RF, 87%; and SVM, 87%. These results emphasize the importance of impedance features in the refinement of gesture recognition. The adaptability of the model was confirmed under different conditions, highlighting its broad applicability.
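A sketch of the classifier comparison described above, with the six named algorithms cross-validated on placeholder impedance-spectrum features (scikit-learn assumed; the actual ISSA feature extraction is not reproduced here).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 32))      # placeholder impedance-spectrum features
y = rng.integers(0, 6, size=500)    # predefined gesture classes

models = {"KNN": KNeighborsClassifier(), "GBM": GradientBoostingClassifier(),
          "NB": GaussianNB(), "LR": LogisticRegression(max_iter=1000),
          "RF": RandomForestClassifier(), "SVM": SVC()}
for name, model in models.items():
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```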

7.
Math Biosci Eng ; 21(4): 5712-5734, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872555

ABSTRACT

This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance in Log-Mel spectrogram image analysis derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance extends to comparisons with transfer learning models for image classification, reaffirming its efficacy. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.
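A sketch of the input representation the DP-CNN operates on: a log-Mel spectrogram image computed from one sEMG channel, here via librosa. The sampling rate, FFT size, and Mel-band count are assumptions rather than the paper's settings, and the signal is synthetic.

```python
import numpy as np
import librosa

fs = 2000                                             # assumed sEMG sampling rate (Hz)
emg = np.random.default_rng(4).normal(size=2 * fs)    # 2 s of placeholder single-channel sEMG

mel = librosa.feature.melspectrogram(y=emg, sr=fs, n_fft=512, hop_length=128, n_mels=32)
log_mel = librosa.power_to_db(mel, ref=np.max)        # log-Mel "image", shape (n_mels, frames)
print(log_mel.shape)
```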


Subjects
Algorithms; Amputees; Electromyography; Gestures; Neural Networks, Computer; Signal Processing, Computer-Assisted; Upper Extremity; Humans; Electromyography/methods; Upper Extremity/physiology; Male; Adult; Female; Young Adult; Middle Aged; Reproducibility of Results
8.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising, potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who had suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and an increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers, slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy in the other classifiers. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke, as compared with conventional transfer learning, neural networks, and traditional machine learning approaches. In addition, K-Best feature selection and an increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry with registration number CHiCTR1800017568 on 2018/08/04.
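A minimal sketch of the prototypical-network classification rule underlying the one-shot transfer approach: class prototypes are the mean embedded support examples, and a query window is assigned to the nearest prototype. The learned embedding network is replaced by an identity placeholder and all data is synthetic.

```python
import numpy as np

def embed(x):
    return x  # placeholder for the learned embedding network

def classify_by_prototype(support_x, support_y, query_x):
    classes = np.unique(support_y)
    protos = np.stack([embed(support_x[support_y == c]).mean(axis=0) for c in classes])
    dists = np.linalg.norm(embed(query_x)[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]    # nearest-prototype label

rng = np.random.default_rng(5)
support_x = rng.normal(size=(7, 24))    # one shot per gesture (7 gestures), 24 features
support_y = np.arange(7)
query_x = rng.normal(size=(10, 24))     # unlabeled windows from the new subject
print(classify_by_prototype(support_x, support_y, query_x))
```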


Subjects
Gestures; Hand; Neural Networks, Computer; Stroke Rehabilitation; Humans; Stroke Rehabilitation/methods; Stroke Rehabilitation/instrumentation; Hand/physiopathology; Male; Female; Middle Aged; Stroke/complications; Stroke/physiopathology; Aged; Machine Learning; Transfer (Psychology)/physiology; Adult; Electromyography; Wearable Electronic Devices
9.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving the accuracy of gesture recognition by increasing the complexity of network models. However, training a complex model necessitates a significant amount of data, thereby escalating both user burden and computational costs. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across different users, conventional machine learning approaches reliant on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to address the problems of large computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis, and the Pearson correlation coefficient (MPP). This method can filter out the optimal subset of features that matches a specific user and, combined with an SVM classifier, accurately and efficiently recognize the user's gesture movements. To validate the effectiveness of this method, we designed an experiment including five gesture actions. The experimental results show that, compared to the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected feature subset as the input to any of the classifiers. This study provides effective support for user-specific fine hand movement decoding based on sEMG signals.
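A hedged sketch of an MPP-style feature ranking: mutual information, first-principal-component loadings, and Pearson correlation are computed per feature and fused by a simple rank average before an SVM is fit on the top-ranked subset. The fusion rule and subset size are assumptions; the abstract does not specify them.

```python
import numpy as np
from scipy.stats import pearsonr, rankdata
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 20))      # placeholder sEMG features
y = rng.integers(0, 5, size=400)    # five gesture actions

mi = mutual_info_classif(X, y, random_state=0)
pca_load = np.abs(PCA(n_components=1).fit(X).components_[0])
pearson = np.array([abs(pearsonr(X[:, j], y)[0]) for j in range(X.shape[1])])

score = rankdata(mi) + rankdata(pca_load) + rankdata(pearson)   # assumed fusion rule
best = np.argsort(score)[-8:]                                   # keep the 8 top-ranked features
clf = SVC().fit(X[:, best], y)
print(clf.score(X[:, best], y))
```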


Subjects
Electromyography; Forearm; Gestures; Hand; Pattern Recognition, Automated; Humans; Electromyography/methods; Hand/physiology; Forearm/physiology; Pattern Recognition, Automated/methods; Male; Adult; Principal Component Analysis; Female; Algorithms; Movement/physiology; Young Adult; Support Vector Machine; Machine Learning
10.
Sensors (Basel) ; 24(11)2024 May 25.
Article in English | MEDLINE | ID: mdl-38894205

ABSTRACT

By integrating sensing capability into wireless communication, wireless sensing technology has become a promising contactless and non-line-of-sight sensing paradigm to explore the dynamic characteristics of channel state information (CSI) for recognizing human behaviors. In this paper, we develop an effective device-free human gesture recognition (HGR) system based on WiFi wireless sensing technology in which the complementary CSI amplitude and phase of communication link are jointly exploited. To improve the quality of collected CSI, a linear transform-based data processing method is first used to eliminate the phase offset and noise and to reduce the impact of multi-path effects. Then, six different time and frequency domain features are chosen for both amplitude and phase, including the mean, variance, root mean square, interquartile range, energy entropy and power spectral entropy, and a feature selection algorithm to remove irrelevant and redundant features is proposed based on filtering and principal component analysis methods, resulting in the construction of a feature subspace to distinguish different gestures. On this basis, a support vector machine-based stacking algorithm is proposed for gesture classification based on the selected and complementary amplitude and phase features. Lastly, we conduct experiments under a practical scenario with one transmitter and receiver. The results demonstrate that the average accuracy of the proposed HGR system is 98.3% and that the F1-score is over 97%.
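A sketch of an SVM-based stacking ensemble of the kind described, trained on the selected amplitude and phase features. The particular base/meta split (two SVM kernels feeding an SVM meta-learner) is an assumption beyond the abstract's "SVM-based stacking", and the data is synthetic.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 12))      # 6 amplitude + 6 phase features per gesture sample
y = rng.integers(0, 4, size=300)    # gesture classes

stack = StackingClassifier(
    estimators=[("svm_rbf", SVC(kernel="rbf")),
                ("svm_lin", SVC(kernel="linear"))],
    final_estimator=SVC(),          # SVM meta-learner
    cv=5)
print(cross_val_score(stack, X, y, cv=5).mean())
```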

11.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has prevailed recently in the field of human-computer interaction for controlling intelligent prosthetics. Currently, machine learning and deep learning are the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains a substantial amount of work. Existing deep learning methods utilize complex neural network architectures to achieve higher accuracy, but may suffer from overfitting, insufficient adaptability, and ultimately lower recognition accuracy. To address these issues, a novel lightweight model named the dual-stream LSTM feature fusion classifier is proposed, based on the concatenation of five time-domain features of EMG signals and the raw data, both of which are processed with one-dimensional convolutional neural networks and LSTM layers to carry out the classification. The proposed method can effectively capture global features of EMG signals using a simple architecture, which means less computational cost. An experiment is conducted on the public DB1 dataset with 52 gestures, in which each of the 27 subjects repeats every gesture 10 times. The accuracy rate achieved by the model is 89.66%, which is comparable to that achieved by more complex deep learning neural networks, and the inference time for each gesture is 87.6 ms, which can also be applied in a real-time control system. The proposed model is further validated using a subject-wise experiment on 10 out of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. This performance stems from its ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal, and from selecting an appropriate, efficient, lightweight network to enhance the recognition results.
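A structural sketch (PyTorch) of a dual-stream LSTM feature-fusion classifier: one branch takes raw sEMG windows and the other takes the stacked time-domain features, each passing through a 1-D convolution and an LSTM before the branch outputs are concatenated and classified. Channel counts and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DualStreamLSTM(nn.Module):
    def __init__(self, n_classes=52, raw_ch=10, feat_ch=5, hidden=64):
        super().__init__()
        self.raw_conv = nn.Conv1d(raw_ch, 32, kernel_size=3, padding=1)
        self.raw_lstm = nn.LSTM(32, hidden, batch_first=True)
        self.feat_conv = nn.Conv1d(feat_ch, 32, kernel_size=3, padding=1)
        self.feat_lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, raw, feats):
        # raw: (batch, raw_ch, time)   feats: (batch, feat_ch, time)
        r = self.raw_conv(raw).permute(0, 2, 1)        # -> (batch, time, 32)
        _, (r_h, _) = self.raw_lstm(r)
        f = self.feat_conv(feats).permute(0, 2, 1)
        _, (f_h, _) = self.feat_lstm(f)
        fused = torch.cat([r_h[-1], f_h[-1]], dim=1)   # concatenate branch features
        return self.head(fused)

logits = DualStreamLSTM()(torch.randn(4, 10, 200), torch.randn(4, 5, 200))
print(logits.shape)   # (4, 52)
```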


Subjects
Deep Learning; Electromyography; Gestures; Neural Networks, Computer; Electromyography/methods; Humans; Signal Processing, Computer-Assisted; Pattern Recognition, Automated/methods; Algorithms; Machine Learning; Hand/physiology; Memory, Short-Term/physiology
12.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE methods led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
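A sketch of the three better-performing selection strategies named above, applied to a placeholder feature matrix sized to match the study (37 features x 8 channels): recursive feature elimination, mutual-information ranking, and tree-based feature importance, keeping 65, 200, and 140 features respectively. The base estimators are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(210, 296))      # 37 features x 8 sEMG channels = 296 columns
y = rng.integers(0, 15, size=210)    # 15 hand-force exercises

rfe_mask = RFE(SVC(kernel="linear"), n_features_to_select=65, step=5).fit(X, y).support_
mi_mask = SelectKBest(mutual_info_classif, k=200).fit(X, y).get_support()
fi = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_
fi_idx = np.argsort(fi)[-140:]
print(rfe_mask.sum(), mi_mask.sum(), fi_idx.size)    # 65 200 140
```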


Subjects
Electromyography; Gestures; Hand; Humans; Electromyography/methods; Hand/physiology; Algorithms; Male; Adult; Female; Signal Processing, Computer-Assisted
13.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life for the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying and converting right-hand gestures into audible sounds or displayed text. METHODS: To achieve the objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain alphabets by successfully distinguishing between their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.

14.
Comput Struct Biotechnol J ; 24: 393-403, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38800692

ABSTRACT

Background and objective: Medical image visualization is a requirement in many types of surgery, such as orthopaedic, spinal, and thoracic procedures or tumour resection, to eliminate risks such as "wrong level surgery". However, direct contact with physical devices such as mice or touch screens to control images is a challenge because of the potential risk of infection. To prevent the spread of infection in sterile environments, a contagious-infection-free medical interaction system has been developed for manipulating medical images. Methods: We propose an integrated system with three key modules: hand landmark detection, hand pointing, and hand gesture recognition. A proposed depth enhancement algorithm is combined with a deep learning hand landmark detector to generate hand landmarks. Based on the designed system, a hand-pointing approach combining projection and ray-pointing techniques reduces fatigue during manipulation. A proposed landmark geometry constraint algorithm and a deep learning method were applied to detect six gestures: click, open, close, zoom, drag, and rotation. Additionally, a control menu was developed to effectively activate common functions. Results: The proposed hand-pointing system allowed for a large control range of up to 1200 mm in both the vertical and horizontal directions. The proposed hand gesture recognition method showed high accuracy of over 97% and real-time response. Conclusion: This paper describes a contagious-infection-free medical interaction system that enables precise and effective manipulation of medical images within a large control range while minimizing hand fatigue.
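An illustrative sketch of a landmark geometry constraint of the kind described, here for a "click" gesture: the gesture is flagged when the thumb-tip to index-tip distance falls below a threshold relative to hand size. The landmark indices follow the MediaPipe hand convention as an assumption, and the threshold is arbitrary.

```python
import numpy as np

def is_click(landmarks, thresh=0.25):
    # landmarks: (21, 3) array of hand keypoints in any consistent unit
    thumb_tip, index_tip = landmarks[4], landmarks[8]
    wrist, middle_mcp = landmarks[0], landmarks[9]
    hand_size = np.linalg.norm(middle_mcp - wrist)           # normalise by hand scale
    return np.linalg.norm(thumb_tip - index_tip) < thresh * hand_size

print(is_click(np.random.default_rng(9).random((21, 3))))
```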

15.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722304

ABSTRACT

Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but only in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control that are not accounted for during typical offline analyses), which inevitably degrade system performance, limiting their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) gesture elicitation speed, a newly identified confound faced by discrete systems. Results from four different discrete myoelectric control architectures, (1) Majority Vote LDA, (2) Dynamic Time Warping, (3) an LSTM network trained with Cross Entropy, and (4) an LSTM network trained with Contrastive Learning, show that classification accuracy is significantly degraded (p<0.05) as a result of each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
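A minimal sketch of the Majority Vote LDA architecture listed above: an LDA classifier labels every sEMG frame in a gesture window and the window-level decision is the modal frame label. Frame features, window length, and class count are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(10)
X_frames = rng.normal(size=(2000, 8))      # per-frame sEMG features (training)
y_frames = rng.integers(0, 5, size=2000)   # frame labels for 5 discrete gestures

lda = LinearDiscriminantAnalysis().fit(X_frames, y_frames)

window = rng.normal(size=(40, 8))          # frames from one unseen gesture window
frame_preds = lda.predict(window)
gesture = np.bincount(frame_preds).argmax()   # majority vote over the window
print(gesture)
```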


Subjects
Electromyography; Gestures; Pattern Recognition, Automated; Humans; Electromyography/methods; Male; Pattern Recognition, Automated/methods; Female; Adult; Young Adult; Artificial Limbs
16.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732808

ABSTRACT

Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
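A sketch (PyTorch) of the attention step described after global average pooling: branch feature maps are pooled, passed through a small squeeze-and-excitation gate, and used to reweight their own channels before the two branches are fused. Dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))        # global average pooling, then gating weights
        return x * w.unsqueeze(-1)         # strengthen relevant channels, weaken others

emg_feat = torch.randn(4, 64, 50)          # sEMG-branch feature maps
acc_feat = torch.randn(4, 64, 50)          # acceleration-branch feature maps
fused = torch.cat([ChannelAttention(64)(emg_feat), ChannelAttention(64)(acc_feat)], dim=1)
print(fused.shape)                          # (4, 128, 50)
```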


Subjects
Electromyography; Gestures; Neural Networks, Computer; Humans; Electromyography/methods; Signal Processing, Computer-Assisted; Pattern Recognition, Automated/methods; Acceleration; Algorithms; Hand/physiology; Machine Learning; Biomechanical Phenomena/physiology
17.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732933

ABSTRACT

This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. Firstly, signal acquisition and processing were carried out, which involved acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) and sensor placement. Then, interference signals were filtered out through filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with obvious features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fitting between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction models, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
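A hedged sketch of the sEMG-to-joint-angle regression step: a backpropagation network trained with a momentum term and an adaptive learning rate, here approximated with scikit-learn's MLPRegressor as a stand-in for the paper's network. Data, layer sizes, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
X = rng.normal(size=(1000, 12))                          # placeholder sEMG feature windows
angle = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)    # placeholder joint angle (rad)

net = MLPRegressor(hidden_layer_sizes=(32, 32), solver="sgd",
                   momentum=0.9,                # momentum term
                   learning_rate="adaptive",    # adaptive learning-rate adjustment
                   max_iter=2000, random_state=0)
net.fit(X, angle)
print(net.predict(X[:3]))
```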


Subjects
Algorithms; Arm; Electromyography; Movement; Neural Networks, Computer; Signal Processing, Computer-Assisted; Humans; Electromyography/methods; Arm/physiology; Movement/physiology; Gestures; Male; Adult
18.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676024

ABSTRACT

In recent decades, technological advancements have transformed the industry, highlighting the efficiency of automation and safety. The integration of augmented reality (AR) and gesture recognition has emerged as an innovative approach to create interactive environments for industrial equipment. Gesture recognition enhances AR applications by allowing intuitive interactions. This study presents a web-based architecture for the integration of AR and gesture recognition, designed to interact with industrial equipment. Emphasizing hardware-agnostic compatibility, the proposed structure offers an intuitive interaction with equipment control systems through natural gestures. Experimental validation, conducted using Google Glass, demonstrated the practical viability and potential of this approach in industrial operations. The development focused on optimizing the system's software and implementing techniques such as normalization, clamping, conversion, and filtering to achieve accurate and reliable gesture recognition under different usage conditions. The proposed approach promotes safer and more efficient industrial operations, contributing to research in AR and gesture recognition. Future work will include improving the gesture recognition accuracy, exploring alternative gestures, and expanding the platform integration to improve the user experience.
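An illustrative sketch of the signal-conditioning steps named above (normalization, clamping, and filtering) applied to one tracked hand coordinate; the exponential-smoothing filter and the value ranges are assumptions, not the system's actual parameters.

```python
import numpy as np

def normalize(x, lo, hi):
    return (x - lo) / (hi - lo)               # map a raw coordinate into [0, 1]

def clamp(x, lo=0.0, hi=1.0):
    return float(np.clip(x, lo, hi))          # reject out-of-range values

def smooth(x, prev, alpha=0.3):
    return alpha * x + (1 - alpha) * prev     # simple low-pass filter against jitter

raw_x, prev = 812.0, 0.55                     # e.g. pixel coordinate from the hand tracker
print(smooth(clamp(normalize(raw_x, 0, 1280)), prev))
```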


Subjects
Augmented Reality; Gestures; Humans; Industry; Software; Pattern Recognition, Automated/methods; User-Computer Interface
19.
Heliyon ; 10(5): e27108, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38562498

ABSTRACT

Continuous gesture recognition can be used to enhance human-computer interaction. This can be accomplished by capturing human movement with the Inertial Measurement Units in smartphones and using machine learning algorithms to predict the intended gestures. Echo State Networks (ESNs) consist of a fixed internal reservoir that generates rich and diverse nonlinear dynamics in response to input signals, capturing temporal dependencies within the signal. This makes ESNs well-suited for time series prediction tasks, such as continuous gesture recognition. However, their application to gesture recognition has not been rigorously explored. In this study, we sought to enhance the efficacy of ESN models in continuous gesture recognition by exploring diverse model structures, fine-tuning hyperparameters, and experimenting with various training approaches. We used three different training schemes based on the Leave-one-out Cross-validation (LOOCV) protocol to investigate performance in real-world scenarios with different levels of data availability: leaving out data from one user for testing (F1-score: 0.89), leaving out a fraction of data from all users for testing (F1-score: 0.96), and training and testing using LOOCV on a single user (F1-score: 0.99). The obtained results outperformed the Long Short-Term Memory (LSTM) performance from past research (F1-score: 0.87) while maintaining a low training time of approximately 13 seconds, compared to 63 seconds for the LSTM model. Additionally, we further explored the performance of the ESN models through behaviour space analysis using memory capacity, Kernel Rank, and Generalization Rank. Our results demonstrate that ESNs can be optimized to achieve high performance on gesture recognition in mobile devices at multiple levels of data availability. These findings highlight the practical ability of ESNs to enhance human-computer interaction.
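A minimal sketch of the Echo State Network setup: a fixed random reservoir driven by IMU samples, with only a linear readout trained on the final reservoir state (here a ridge classifier). Reservoir size, spectral radius, and leak rate are assumptions, and the data is synthetic.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(12)
n_in, n_res = 6, 200                                   # 6 IMU channels, 200 reservoir units
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # scale spectral radius to 0.9

def reservoir_state(seq, leak=0.3):
    x = np.zeros(n_res)
    for u in seq:                                      # u: one IMU sample, shape (6,)
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

X_seq = rng.normal(size=(120, 100, n_in))              # 120 gesture sequences of 100 samples
y = rng.integers(0, 4, size=120)
states = np.stack([reservoir_state(s) for s in X_seq])
readout = RidgeClassifier().fit(states, y)             # only the readout is trained
print(readout.score(states, y))
```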

20.
ACS Nano ; 18(16): 10818-10828, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38597459

ABSTRACT

Rapid advancements in immersive communications and artificial intelligence have created a pressing demand for high-performance tactile sensing gloves capable of delivering high sensitivity and a wide sensing range. Unfortunately, existing tactile sensing gloves fall short in terms of user comfort and are ill-suited for underwater applications. To address these limitations, we propose a flexible hand gesture recognition glove (GRG) that contains high-performance micropillar tactile sensors (MPTSs) inspired by the flexible tube foot of a starfish. The as-prepared flexible sensors offer a wide working range (5 Pa to 450 kPa), superfast response time (23 ms), reliable repeatability (∼10000 cycles), and a low limit of detection. Furthermore, these MPTSs are waterproof, which makes them well-suited for underwater applications. By integrating the high-performance MPTSs with a machine learning algorithm, the proposed GRG system achieves intelligent recognition of 16 hand gestures under water, which significantly extends real-time and effective communication capabilities for divers. The GRG system holds tremendous potential for a wide range of applications in the field of underwater communications.
