Results 1 - 20 of 327
1.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.


Subjects
Gestures, Robotics, Robotics/methods, Humans, Automated Pattern Recognition/methods, Algorithms, Artificial Intelligence
2.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity because they allow effortless and natural interaction between the user and the machine by processing information gathered from one or more sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration with newly acquired data, since they must adapt to dynamic environments in which test-time data continuously change in unforeseen ways; this need is a significant contributor to their abandonment and remains unexplored by the Ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable to the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% improvement in classification accuracy compared to the no-re-calibration setting. However, our results suggest that when the experimental setup and the UDA configuration differ, the observed enhancements may be small or even unnoticeable.
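The DANN algorithm cited above aligns features extracted from labeled source-session data and unlabeled target-session data through adversarial training with a gradient-reversal layer. The abstract does not give the implementation, so the following PyTorch sketch is only a minimal illustration of that idea; the layer sizes, number of gestures, and loss weighting are placeholders, not the authors' settings.

```python
# Minimal DANN sketch (assumed toy dimensions; not the authors' architecture).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stands in for the CNN backbone
gesture_head = nn.Linear(64, 5)                                   # assumed 5 finger gestures
domain_head = nn.Linear(64, 1)                                    # source vs. target session

opt = torch.optim.Adam(list(feature_extractor.parameters()) +
                       list(gesture_head.parameters()) +
                       list(domain_head.parameters()), lr=1e-3)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt, lam=0.3):
    """One DANN update: supervised loss on source + adversarial domain loss on both domains."""
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
    cls_loss = ce(gesture_head(f_src), y_src)
    feats = torch.cat([f_src, f_tgt])
    dom_labels = torch.cat([torch.zeros(len(x_src), 1), torch.ones(len(x_tgt), 1)])
    dom_loss = bce(domain_head(GradReverse.apply(feats, lam)), dom_labels)
    loss = cls_loss + dom_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example with random tensors standing in for real ultrasound features.
loss = train_step(torch.randn(32, 128), torch.randint(0, 5, (32,)), torch.randn(32, 128))
```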


Subjects
Algorithms, Ultrasonography, Humans, Ultrasonography/methods, User-Computer Interface, Wrist/physiology, Wrist/diagnostic imaging, Neural Networks (Computer), Fingers/physiology, Man-Machine Systems, Gestures
3.
Adv Sci (Weinh) ; : e2402175, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981031

ABSTRACT

A self-powered mechanoreceptor array is demonstrated using four mechanoreceptor cells for the recognition of dynamic touch gestures. Each cell consists of a triboelectric nanogenerator (TENG) for touch sensing and a bi-stable resistor (biristor) for spike encoding. It produces informative spike signals by sensing the force of an external touch and encoding that force into the number of spikes. An array of the mechanoreceptor cells was used to monitor various touch gestures, and it successfully generated spike signals corresponding to all of them. To validate the practicality of the mechanoreceptor array, a spiking neural network (SNN), highly attractive in terms of power consumption compared with the conventional von Neumann architecture, is used to identify the touch gestures. The measured spiking signals serve as inputs for the SNN simulations. Consequently, touch gestures are classified with a high accuracy of 92.5%. The proposed mechanoreceptor array emerges as a promising building block for tactile in-sensor computing in the era of the Internet of Things (IoT), owing to the low cost and high manufacturability of the TENG, which eliminates the need for a power supply, coupled with the intrinsic high throughput of the Si-based biristor employing complementary metal-oxide-semiconductor (CMOS) technology.

4.
Int J Biol Macromol ; 276(Pt 1): 133802, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38992552

ABSTRACT

Pursuing high-performance conductive hydrogels is still a hot topic in the development of advanced flexible wearable devices. Herein, a tough, self-healing, adhesive double-network (DN) conductive hydrogel (named OSA-(Gelatin/PAM)-Ca, O-(G/P)-Ca) was prepared by bridging gelatin and polyacrylamide networks with a functionalized polysaccharide (oxidized sodium alginate, OSA) through a Schiff base reaction. Thanks to the multiple interactions (Schiff base bonds, hydrogen bonds, and metal coordination) within the network, the prepared hydrogel showed outstanding mechanical properties (tensile strain of 2800% and stress of 630 kPa), high conductivity (0.72 S/m), repeatable adhesion and excellent self-healing ability (83.6%/79.0% of the original tensile strain/stress after self-healing). Moreover, the hydrogel-based sensor exhibited high strain sensitivity (GF = 3.66) and a fast response time (<0.5 s), which can be used to monitor a wide range of human physiological signals. Building on its excellent compression sensitivity (GF = 0.41 kPa⁻¹ in the range of 90-120 kPa), a three-dimensional (3D) flexible sensor array was designed to monitor the intensity of pressure and the spatial force distribution. In addition, a gel-based wearable sensor accurately classified and recognized ten types of gestures, achieving an accuracy rate of >96.33% both before and after self-healing under three machine learning models (decision tree, SVM, and KNN). This paper provides a simple method to prepare tough, self-healing conductive hydrogels as flexible multifunctional sensor devices for versatile applications in fields such as healthcare monitoring, human-computer interaction, and artificial intelligence.

5.
ACS Sens ; 2024 Jul 28.
Article in English | MEDLINE | ID: mdl-39068608

ABSTRACT

Thermoelectric (TE) hydrogels that mimic human skin and possess both temperature and strain sensing capabilities are well suited for human-machine interaction interfaces and wearable devices. In this study, a TE hydrogel with high toughness and temperature responsiveness was created by exploiting the Hofmeister effect and the TE current effect, achieved through the cross-linking of PVA/PAA/carboxymethyl cellulose triple networks. The Hofmeister effect, facilitated by the coordination of Na⁺ and SO₄²⁻ ions, notably increased the hydrogel's tensile strength (800 kPa). The introduction of Fe²⁺/Fe³⁺ as a redox pair conferred a high Seebeck coefficient (2.3 mV K⁻¹), thereby enhancing temperature responsiveness. Using this dual-responsive sensor, a feedback mechanism combining deep learning with a robotic hand was successfully demonstrated (with a recognition accuracy of 95.30%), alongside temperature warnings at various levels. The sensor is expected to replace manual work by controlling the manipulator in high-temperature and high-risk scenarios, thereby improving the safety factor and underscoring the vast potential of TE hydrogel sensors in motion monitoring and human-machine interaction applications.

6.
Front Neurosci ; 18: 1306047, 2024.
Article in English | MEDLINE | ID: mdl-39050666

ABSTRACT

Surface electromyographic (sEMG) signals reflect human motor intention and can be utilized for human-machine interfaces (HMI). Compared with sparse multi-channel (SMC) electrodes, high-density (HD) electrode grids contain a large number of closely spaced electrodes, which capture more sEMG information and have the potential to achieve higher myocontrol performance. However, when the HD electrode grid shifts or is damaged, gesture recognition is affected and recognition accuracy decreases. To minimize the impact of electrode shift and damage, we propose an attention deep fast convolutional neural network (attention-DFCNN) model that exploits the temporal and spatial characteristics of high-density surface electromyography (HD-sEMG) signals. Unlike previous methods, which are mostly based on sEMG temporal features, the attention-DFCNN model improves robustness and stability by combining spatial and temporal features. The performance of the proposed model was compared with classical and deep learning methods. We used the dataset provided by the University Medical Center Göttingen. Seven able-bodied subjects and one amputee were involved in this work. Each subject executed nine gestures under electrode shift (10 mm) and electrode damage (6 channels) conditions. For an electrode shift of 10 mm in four directions (inwards, outwards, upwards, downwards) on seven able-bodied subjects, without any pre-training, the average accuracy of attention-DFCNN (0.942 ± 0.04) is significantly higher than LSDA (0.910 ± 0.04, p < 0.01), CNN (0.920 ± 0.05, p < 0.01), TCN (0.840 ± 0.07, p < 0.01), LSTM (0.864 ± 0.08, p < 0.01), attention-BiLSTM (0.852 ± 0.07, p < 0.01), Transformer (0.903 ± 0.07, p < 0.01) and Swin-Transformer (0.908 ± 0.09, p < 0.01). The proposed attention-DFCNN algorithm and the strategy of combining the spatial and temporal features of sEMG signals can significantly improve the recognition rate when the HD electrode grid shifts or is damaged during wear.

7.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible and wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit along with the externally connected inductors and capacitors. Changes in elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing pipeline and the random forest MLA used to recognize 10 different elbow gestures are described. Rigorous evaluation on 8 subjects and data augmentation, which expanded the dataset to 1270 trials per gesture, enabled the system to achieve remarkable accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. The test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
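The leave-one-subject-out protocol described above holds out all trials of one subject per fold, which is what probes generalization to unseen users. A minimal scikit-learn sketch of that evaluation for a random-forest gesture classifier follows; the feature matrix, labels, and subject IDs are synthetic placeholders, not the authors' data.

```python
# Leave-one-subject-out evaluation of a random-forest gesture classifier (sketch on synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features, n_gestures = 8, 50, 12, 10
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))  # placeholder inductive-sensor features
y = rng.integers(0, n_gestures, size=len(X))                        # placeholder gesture labels
groups = np.repeat(np.arange(n_subjects), trials_per_subject)       # subject ID of each trial

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("per-subject accuracy:", np.round(scores, 3), "mean:", scores.mean())
```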


Subjects
Algorithms, Elbow, Gestures, Machine Learning, Humans, Elbow/physiology, Wearable Electronic Devices, Automated Pattern Recognition/methods, Computer-Assisted Signal Processing, Male, Adult, Female
9.
Comput Biol Med ; 179: 108817, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39004049

ABSTRACT

Force myography (FMG) is gaining increasing importance in gesture recognition because of its ability to achieve high classification accuracy without requiring direct contact with the skin. In this study, we investigate the performance of a bracelet with only six commercial force-sensitive resistor (FSR) sensors for classifying a large set of hand gestures representing all letters and the numbers from 0 to 10 in American Sign Language. For this, we introduce an optimized feature selection in combination with the Extreme Learning Machine (ELM) as a classifier, investigating three swarm intelligence algorithms: the binary grey wolf optimizer (BGWO), the binary grasshopper optimizer (BGOA), and the binary hybrid grey wolf particle swarm optimizer (BGWOPSO), which is used as an optimization method for the ELM for the first time in this study. The findings reveal that BGWOPSO, in which PSO supports the GWO optimizer by controlling its exploration and exploitation through an inertia constant to improve convergence toward the global optimum, outperformed the other investigated algorithms. In addition, the results show that optimizing the ELM with BGWOPSO for feature selection can efficiently improve its performance, raising the classification accuracy from 32% to 69.84% for 37 gestures collected from multiple volunteers using only a band with 6 FSR sensors.
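An ELM trains only its output weights: the hidden-layer weights are random and fixed, and the readout is solved in closed form. The NumPy sketch below shows that bare-bones scheme under assumed dimensions (6 FSR channels, 37 gesture classes); the swarm-based feature selection wrapper from the study is not included.

```python
# Bare-bones Extreme Learning Machine (random hidden layer + ridge-regularized least-squares readout).
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=200, n_classes=37):
    """Fit an ELM: random input weights, sigmoid activations, closed-form readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # hidden-layer activations
    T = np.eye(n_classes)[y]                                  # one-hot targets
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)

# Synthetic stand-in for 6-channel FSR features and 37 ASL gesture labels.
X = rng.normal(size=(500, 6))
y = rng.integers(0, 37, size=500)
W, b, beta = elm_fit(X, y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```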


Subjects
Algorithms, Gestures, Humans, Machine Learning, Myography/methods, Male, Female
10.
Math Biosci Eng ; 21(4): 5712-5734, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872555

ABSTRACT

This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance in Log-Mel spectrogram image analysis derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance extends to comparisons with transfer learning models for image classification, reaffirming its efficacy. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.
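The DP-CNN operates on Log-Mel spectrogram images computed from raw multichannel EMG. As a rough illustration of that preprocessing step, the snippet below converts one EMG channel into a log-Mel spectrogram with librosa; the sampling rate, FFT length, and Mel-band count are assumptions, not the paper's settings.

```python
# Converting a single EMG channel to a log-Mel spectrogram image (parameters are assumptions).
import numpy as np
import librosa

fs = 2000                                                # assumed sEMG sampling rate in Hz
emg = np.random.randn(fs * 5).astype(np.float32)         # placeholder for one 5-second EMG channel

mel = librosa.feature.melspectrogram(y=emg, sr=fs, n_fft=256, hop_length=64, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)           # (n_mels, n_frames) image fed to the CNN
print(log_mel.shape)
```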


Subjects
Algorithms, Amputees, Electromyography, Gestures, Neural Networks (Computer), Computer-Assisted Signal Processing, Upper Extremity, Humans, Electromyography/methods, Upper Extremity/physiology, Male, Adult, Female, Young Adult, Middle Aged, Reproducibility of Results
11.
J Electr Bioimpedance ; 15(1): 63-74, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38863504

ABSTRACT

Gesture recognition is a crucial aspect of the advancement of virtual reality, healthcare, and human-computer interaction, and it requires innovative methodologies to meet the increasing demands for precision. This paper presents a novel approach that combines Impedance Signal Spectrum Analysis (ISSA) with machine learning to improve gesture recognition precision. A diverse dataset was collected from participants of various demographic backgrounds (five individuals), each executing a range of predefined gestures. The predefined gestures were designed to encompass a broad spectrum of hand movements, including intricate and subtle variations, to challenge the robustness of the proposed methodology. Machine learning models using the K-Nearest Neighbors (KNN), Gradient Boosting Machine (GBM), Naive Bayes (NB), Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) algorithms demonstrated notable precision in performance evaluations. The individual accuracy values for each algorithm are as follows: KNN, 86%; GBM, 86%; NB, 84%; LR, 89%; RF, 87%; and SVM, 87%. These results emphasize the importance of impedance features in the refinement of gesture recognition. The adaptability of the model was confirmed under different conditions, highlighting its broad applicability.
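The per-algorithm accuracies above come from training six standard classifiers on impedance-spectrum features. The loop below shows a generic scikit-learn comparison of the same six algorithm families on placeholder data; the hyperparameters and data are illustrative only, not the authors' pipeline.

```python
# Comparing the six classifier families on placeholder impedance features (scikit-learn sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))       # placeholder ISSA impedance features
y = rng.integers(0, 6, size=300)     # placeholder gesture labels

models = {
    "KNN": KNeighborsClassifier(),
    "GBM": GradientBoostingClassifier(),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```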

12.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving recognition accuracy by increasing the complexity of network models. However, training a complex model requires a significant amount of data, thereby escalating both the user burden and the computational cost. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across users, conventional machine learning approaches that rely on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to address the problems of large computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis and the Pearson correlation coefficient (MPP). This method filters out the optimal subset of features that matches a specific user and, combined with an SVM classifier, accurately and efficiently recognizes the user's gesture movements. To validate the effectiveness of this method, we designed an experiment involving five gesture actions. The experimental results show that, compared with the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected feature subset as the input to any of the classifiers. This study provides an effective guarantee for user-specific fine hand movement decoding based on sEMG signals.
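The abstract does not spell out how the MPP method combines its three ingredients, so the sketch below is only one plausible reading: rank features by mutual information with the gesture label and discard candidates that are strongly Pearson-correlated with already-selected features before feeding an SVM (the PCA component of the method is omitted here). The data are made up.

```python
# One plausible MI + Pearson-redundancy feature selection feeding an SVM (not the authors' exact rule).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                    # placeholder sEMG feature matrix
y = rng.integers(0, 5, size=400)                  # placeholder labels for five gestures

mi = mutual_info_classif(X, y, random_state=0)    # relevance of each feature to the gesture label
order = np.argsort(mi)[::-1]

selected = []
for idx in order:
    # Skip a candidate if it is strongly Pearson-correlated with an already-selected feature.
    if all(abs(np.corrcoef(X[:, idx], X[:, s])[0, 1]) < 0.9 for s in selected):
        selected.append(idx)
    if len(selected) == 10:
        break

acc = cross_val_score(SVC(), X[:, selected], y, cv=5).mean()
print("selected features:", selected, "cv accuracy:", round(acc, 3))
```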


Subjects
Electromyography, Forearm, Gestures, Hand, Automated Pattern Recognition, Humans, Electromyography/methods, Hand/physiology, Forearm/physiology, Automated Pattern Recognition/methods, Male, Adult, Principal Component Analysis, Female, Algorithms, Movement/physiology, Young Adult, Support Vector Machine, Machine Learning
13.
Sensors (Basel) ; 24(11)2024 May 25.
Article in English | MEDLINE | ID: mdl-38894205

ABSTRACT

By integrating sensing capability into wireless communication, wireless sensing technology has become a promising contactless and non-line-of-sight sensing paradigm that explores the dynamic characteristics of channel state information (CSI) to recognize human behaviors. In this paper, we develop an effective device-free human gesture recognition (HGR) system based on WiFi wireless sensing technology, in which the complementary CSI amplitude and phase of the communication link are jointly exploited. To improve the quality of the collected CSI, a linear transform-based data processing method is first used to eliminate the phase offset and noise and to reduce the impact of multi-path effects. Then, six different time- and frequency-domain features are chosen for both amplitude and phase, including the mean, variance, root mean square, interquartile range, energy entropy and power spectral entropy, and a feature selection algorithm based on filtering and principal component analysis methods is proposed to remove irrelevant and redundant features, resulting in the construction of a feature subspace that distinguishes different gestures. On this basis, a support vector machine-based stacking algorithm is proposed for gesture classification based on the selected and complementary amplitude and phase features. Lastly, we conduct experiments under a practical scenario with one transmitter and one receiver. The results demonstrate that the average accuracy of the proposed HGR system is 98.3% and that the F1-score is over 97%.
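The classifier is described as an SVM-based stacking ensemble over the selected amplitude and phase features. The abstract does not name the base learners, so the sketch below assumes a few common ones with an SVM meta-learner, using scikit-learn's StackingClassifier on placeholder CSI features.

```python
# SVM-based stacking over placeholder CSI amplitude/phase features (base learners are assumptions).
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))      # 6 time/frequency features x (amplitude, phase), placeholder values
y = rng.integers(0, 6, size=600)    # placeholder gesture labels

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=SVC(),          # SVM as the meta-learner
)
print("cv accuracy:", cross_val_score(stack, X, y, cv=5).mean().round(3))
```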

14.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has recently prevailed in the field of human-computer interaction for controlling intelligent prosthetics. Currently, machine learning and deep learning are the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains a large amount of work. Existing deep learning methods utilize complex neural network architectures to achieve higher accuracy, but they can suffer from overfitting, insufficient adaptability, and degraded recognition accuracy. To improve on this situation, a novel lightweight model, the dual-stream LSTM feature fusion classifier, is proposed; it concatenates five time-domain features of the EMG signals with the raw data, both of which are processed with one-dimensional convolutional neural networks and LSTM layers to carry out the classification. The proposed method can effectively capture the global features of EMG signals using a simple architecture, which means a lower computational cost. An experiment is conducted on the public DB1 dataset with 52 gestures, in which each of the 27 subjects repeats every gesture 10 times. The accuracy achieved by the model is 89.66%, which is comparable to that of more complex deep learning neural networks, and the inference time for each gesture is 87.6 ms, which also allows deployment in a real-time control system. The proposed model is further validated in a subject-wise experiment on 10 of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. These results illustrate the model's ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal while using an appropriate, efficient, lightweight network to enhance the recognition results.
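The model processes raw sEMG and time-domain feature sequences in two parallel 1D-CNN + LSTM streams and fuses them for classification. The PyTorch sketch below mirrors that overall structure with assumed layer sizes, channel counts, and window lengths; the paper's exact hyperparameters are not given in the abstract.

```python
# Dual-stream 1D-CNN + LSTM fusion classifier (layer sizes and window length are assumptions).
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One branch: 1D convolution over time followed by an LSTM; returns the last hidden state."""
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, 32)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]                         # (batch, hidden)

class DualStreamClassifier(nn.Module):
    def __init__(self, n_emg_ch=10, n_feat_ch=5, n_classes=52):
        super().__init__()
        self.raw_stream = Stream(n_emg_ch)     # raw sEMG window
        self.feat_stream = Stream(n_feat_ch)   # per-window time-domain feature sequences
        self.head = nn.Linear(128, n_classes)

    def forward(self, raw, feats):
        fused = torch.cat([self.raw_stream(raw), self.feat_stream(feats)], dim=1)
        return self.head(fused)

model = DualStreamClassifier()
logits = model(torch.randn(8, 10, 200), torch.randn(8, 5, 200))   # toy batch of 8 windows
print(logits.shape)                                               # torch.Size([8, 52])
```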


Subjects
Deep Learning, Electromyography, Gestures, Neural Networks (Computer), Electromyography/methods, Humans, Computer-Assisted Signal Processing, Automated Pattern Recognition/methods, Algorithms, Machine Learning, Hand/physiology, Short-Term Memory/physiology
15.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE methods led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
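RFE, the best-performing wrapper above, repeatedly fits an estimator and discards the weakest features until the target count is reached. The snippet below shows the scikit-learn form of selecting 65 of the 296 candidate features (37 features x 8 channels); the estimator choice and data are illustrative, not the authors' exact pipeline.

```python
# Recursive Feature Elimination down to 65 of 296 candidate features (estimator choice is an assumption).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(420, 296))      # placeholder: 37 features x 8 sEMG channels
y = rng.integers(0, 15, size=420)    # placeholder: 15 hand-force exercises

selector = RFE(estimator=LogisticRegression(max_iter=2000), n_features_to_select=65, step=10)
X_sel = selector.fit_transform(X, y)

acc = cross_val_score(SVC(), X_sel, y, cv=5).mean()
print("kept features:", X_sel.shape[1], "cv accuracy:", round(acc, 3))
```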


Subjects
Electromyography, Gestures, Hand, Humans, Electromyography/methods, Hand/physiology, Algorithms, Male, Adult, Female, Computer-Assisted Signal Processing
16.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life of the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying right-hand gestures and converting them into audible sounds or displayed text. METHODS: To achieve these objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain letters by successfully distinguishing between their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life of individuals who are unable to speak or hear, promoting inclusivity and accessibility.

17.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising, potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who had suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and an increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers, slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy in the other classifiers. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke as compared with conventional transfer learning, neural networks and traditional machine learning approaches. In addition, K-Best feature selection and an increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry under registration number CHiCTR1800017568 on 2018/08/04.
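Prototypical networks classify a query by comparing its embedding with class prototypes, i.e. the mean embedding of the few labeled support examples per gesture; with one support example per class this becomes the one-shot transfer setting used above. The sketch below shows only the prototype/nearest-distance step on placeholder embeddings (the embedding network and K-Best selection are omitted).

```python
# Prototype computation and nearest-prototype classification (embedding network omitted; toy data).
import numpy as np

rng = np.random.default_rng(0)
n_classes, emb_dim = 7, 32                      # 7 gestures as in the study; embedding size assumed

# One support embedding per gesture from the new user (one-shot setting).
support = rng.normal(size=(n_classes, 1, emb_dim))
prototypes = support.mean(axis=1)               # class prototype = mean of that class's support embeddings

# Classify query embeddings by Euclidean distance to the nearest prototype.
queries = rng.normal(size=(20, emb_dim))
dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=2)
pred = dists.argmin(axis=1)
print(pred)
```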


Subjects
Gestures, Hand, Neural Networks (Computer), Stroke Rehabilitation, Humans, Stroke Rehabilitation/methods, Stroke Rehabilitation/instrumentation, Hand/physiopathology, Male, Female, Middle Aged, Stroke/complications, Stroke/physiopathology, Aged, Machine Learning, Transfer (Psychology)/physiology, Adult, Electromyography, Wearable Electronic Devices
18.
Comput Struct Biotechnol J ; 24: 393-403, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38800692

ABSTRACT

Background and objective: Medical image visualization is a requirement in many types of surgery, such as orthopaedic, spinal and thoracic procedures or tumour resection, to eliminate risks such as "wrong level surgery". However, direct contact with physical devices such as mice or touch screens to control the images is a challenge because of the potential risk of infection. To prevent the spread of infection in sterile environments, a contactless medical interaction system free of contagious-infection risk has been developed for manipulating medical images. Methods: We proposed an integrated system with three key modules: hand landmark detection, hand pointing, and hand gesture recognition. A proposed depth enhancement algorithm is combined with a deep learning hand landmark detector to generate hand landmarks. Based on the designed system, a proposed hand-pointing scheme combined with projection and ray-pointing techniques reduces fatigue during manipulation. A proposed landmark geometry constraint algorithm and a deep learning method were applied to detect six gestures: click, open, close, zoom, drag, and rotation. Additionally, a control menu was developed to effectively activate common functions. Results: The proposed hand-pointing system allowed a large control range of up to 1200 mm in both the vertical and horizontal directions. The proposed hand gesture recognition method showed high accuracy of over 97% and real-time response. Conclusion: This paper described an infection-free medical interaction system that enables precise and effective manipulation of medical images within a large control range while minimizing hand fatigue.

19.
J Neural Eng ; 21(3)2024 May 17.
Article in English | MEDLINE | ID: mdl-38722304

ABSTRACT

Discrete myoelectric control-based gesture recognition has recently gained interest as a possible input modality for many emerging ubiquitous computing applications. Unlike the continuous control commonly employed in powered prostheses, discrete systems seek to recognize the dynamic sequences associated with gestures to generate event-based inputs. More akin to those used in general-purpose human-computer interaction, these could include, for example, a flick of the wrist to dismiss a phone call or a double tap of the index finger and thumb to silence an alarm. Myoelectric control systems have been shown to achieve near-perfect classification accuracy, but only in highly constrained offline settings. Real-world, online systems are subject to 'confounding factors' (i.e. factors that hinder the real-world robustness of myoelectric control and are not accounted for during typical offline analyses), which inevitably degrade system performance and limit their practical use. Although these factors have been widely studied in continuous prosthesis control, there has been little exploration of their impacts on discrete myoelectric control systems for emerging applications and use cases. Correspondingly, this work examines, for the first time, three confounding factors and their effect on the robustness of discrete myoelectric control: (1) limb position variability, (2) cross-day use, and (3) gesture elicitation speed, a newly identified confound faced by discrete systems. Results from four different discrete myoelectric control architectures, (1) Majority Vote LDA, (2) Dynamic Time Warping, (3) an LSTM network trained with Cross Entropy, and (4) an LSTM network trained with Contrastive Learning, show that classification accuracy is significantly degraded (p < 0.05) as a result of each of these confounds. This work establishes that confounding factors are a critical barrier that must be addressed to enable the real-world adoption of discrete myoelectric control for robust and reliable gesture recognition.
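Majority Vote LDA, the first of the four architectures compared, classifies each frame of the gesture window with an LDA model and takes the most frequent frame label as the gesture decision. A minimal scikit-learn sketch of that decision rule on placeholder frame features follows; window length and feature dimensions are assumptions.

```python
# Majority-vote LDA over per-frame predictions of a gesture window (toy data; dimensions assumed).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_frames, n_features, n_gestures = 40, 8, 5

# Train LDA on frame-level features pooled from many gesture repetitions (placeholder values).
X_train = rng.normal(size=(2000, n_features))
y_train = rng.integers(0, n_gestures, size=2000)
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

def classify_window(frames):
    """Predict every frame, then return the most frequent gesture label as the event decision."""
    frame_preds = lda.predict(frames)
    return np.bincount(frame_preds, minlength=n_gestures).argmax()

window = rng.normal(size=(n_frames, n_features))   # one dynamic gesture as a sequence of frames
print("gesture decision:", classify_window(window))
```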


Subjects
Electromyography, Gestures, Automated Pattern Recognition, Humans, Electromyography/methods, Male, Automated Pattern Recognition/methods, Female, Adult, Young Adult, Artificial Limbs
20.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732808

ABSTRACT

Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
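The attention mechanism introduced after global average pooling re-weights feature channels so that relevant ones are strengthened and irrelevant ones weakened. A squeeze-and-excitation-style block is one common way to realize this; the PyTorch sketch below shows that pattern with assumed channel counts, not the authors' exact design.

```python
# Squeeze-and-excitation-style channel attention after global average pooling (sizes assumed).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights channels: GAP over time -> bottleneck MLP -> sigmoid gates applied to the features."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        w = self.gate(x.mean(dim=2))         # global average pooling over time, then gating weights
        return x * w.unsqueeze(2)            # strengthen relevant channels, weaken irrelevant ones

feats = torch.randn(4, 64, 50)               # toy fused sEMG/acceleration feature maps
print(ChannelAttention(64)(feats).shape)      # torch.Size([4, 64, 50])
```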


Subjects
Electromyography, Gestures, Neural Networks (Computer), Humans, Electromyography/methods, Computer-Assisted Signal Processing, Automated Pattern Recognition/methods, Acceleration, Algorithms, Hand/physiology, Machine Learning, Biomechanical Phenomena/physiology