Results 1 - 20 of 330

1.
Skin Res Technol ; 30(2): e13625, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38385865

ABSTRACT

INTRODUCTION: The application of artificial intelligence to facial aesthetics has been limited by the inability to discern facial zones of interest, as defined by complex facial musculature and underlying structures. Although semantic segmentation models (SSMs) could potentially overcome this limitation, existing facial SSMs distinguish only three to nine facial zones of interest. METHODS: We developed a new supervised SSM, trained on 669 high-resolution clinical-grade facial images; a subset of these images was used in an iterative process between facial aesthetics experts and manual annotators that defined and labeled 33 facial zones of interest. RESULTS: Because some zones overlap, some pixels are included in multiple zones, violating the one-to-one relationship between a given pixel and a specific class (zone) required for SSMs. The full facial zone model was therefore used to create three sub-models, each with completely non-overlapping zones, generating three outputs for each input image that can be treated as standalone models. For each facial zone, the output demonstrating the best Intersection Over Union (IOU) value was selected as the winning prediction. CONCLUSIONS: The new SSM demonstrates mean IOU values superior to manual annotation and landmark analyses, and it is more robust than landmark methods in handling variances in facial shape and structure.
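
A hedged sketch of the winner-selection step described above: for each zone, the sub-model output with the best Intersection over Union (IoU) is kept. The mask layout and helper names are illustrative assumptions, not the authors' code.

```python
# Pick, per facial zone, the sub-model prediction with the best IoU.
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """IoU between two boolean segmentation masks."""
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 0.0

def select_winners(submodel_masks: dict, reference_masks: dict) -> dict:
    """submodel_masks: zone -> list of boolean masks, one per sub-model.
    reference_masks: zone -> boolean reference mask for that zone."""
    winners = {}
    for zone, candidates in submodel_masks.items():
        scores = [iou(m, reference_masks[zone]) for m in candidates]
        winners[zone] = candidates[int(np.argmax(scores))]  # best IoU wins
    return winners
```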


Subject(s)
Artificial Intelligence, Semantics, Humans, Face/diagnostic imaging, Facial Muscles
2.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising, potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and an increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers: slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy of the others. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke compared with conventional transfer learning, neural networks, and traditional machine learning approaches. In addition, K-Best feature selection and an increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION: The study was registered in the Chinese Clinical Trial Registry under registration number CHiCTR1800017568 on 2018/08/04.
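
As a rough illustration of the prototypical-network idea behind the proposed one-shot transfer learning, the PyTorch sketch below classifies query windows by Euclidean distance to per-gesture prototypes built from a single support example; the embedding network and tensor shapes are assumptions.

```python
import torch

def prototype_classify(embed, support_x, support_y, query_x, n_classes):
    """embed: a trained nn.Module mapping sensor windows to embeddings.
    support_x/support_y: one labelled example per gesture (one-shot).
    Returns predicted gesture indices for query_x."""
    with torch.no_grad():
        z_s = embed(support_x)                  # (n_support, d)
        z_q = embed(query_x)                    # (n_query, d)
        # One prototype per class; with k shots this averages k embeddings.
        protos = torch.stack([z_s[support_y == c].mean(0)
                              for c in range(n_classes)])
        return torch.cdist(z_q, protos).argmin(dim=1)  # nearest prototype
```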


Subject(s)
Gestures, Hand, Neural Networks (Computer), Stroke Rehabilitation, Humans, Stroke Rehabilitation/methods, Stroke Rehabilitation/instrumentation, Hand/physiopathology, Male, Female, Middle Aged, Stroke/complications, Stroke/physiopathology, Aged, Machine Learning, Transfer (Psychology)/physiology, Adult, Electromyography, Wearable Electronic Devices
3.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible and wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit along with the externally connected inductors and capacitors. Changes in the elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal-processing pipeline and the random forest MLA used to recognize 10 different elbow gestures are then described. Rigorous evaluation on 8 subjects, together with data augmentation that expanded the dataset to 1270 trials per gesture, enabled the system to achieve accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. Test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
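
For readers reproducing the protocol, here is a minimal scikit-learn sketch of leave-one-subject-out cross-validation with a random forest; the feature dimensions and random data are placeholders mirroring the abstract, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((800, 24))             # placeholder coil-array features
y = rng.integers(0, 10, 800)          # 10 elbow gestures
groups = rng.integers(0, 8, 800)      # 8 subjects

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-subject-out accuracy: {scores.mean():.3f}")
```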


Subject(s)
Algorithms, Elbow, Gestures, Machine Learning, Humans, Elbow/physiology, Wearable Electronic Devices, Pattern Recognition (Automated)/methods, Signal Processing (Computer-Assisted), Male, Adult, Female
4.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has recently prevailed in the field of human-computer interaction for controlling intelligent prosthetics. Machine learning and deep learning are currently the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains labor-intensive. Existing deep learning methods rely on complex neural network architectures to achieve higher accuracy, but these can suffer from overfitting, insufficient adaptability, and low recognition accuracy. To address these issues, a novel lightweight model named the dual-stream LSTM feature fusion classifier is proposed: five time-domain features of the EMG signal and the raw data are each processed with one-dimensional convolutional neural networks and LSTM layers, and the two streams are concatenated to carry out the classification. The proposed method effectively captures global features of EMG signals using a simple architecture, which means less computational cost. An experiment is conducted on the public DB1 dataset with 52 gestures, in which each of the 27 subjects repeats every gesture 10 times. The accuracy rate achieved by the model is 89.66%, comparable to that of more complex deep learning neural networks, and the inference time for each gesture is 87.6 ms, which allows deployment in a real-time control system. The proposed model is further validated in a subject-wise experiment on 10 of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. These results reflect the model's ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal, using an appropriate, efficient, lightweight network to enhance the recognition results.
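
A minimal PyTorch sketch of a dual-stream design in the spirit described above: one stream for raw sEMG windows, one for stacked time-domain feature sequences, each passed through a 1D CNN and an LSTM before fusion. All layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualStreamLSTM(nn.Module):
    def __init__(self, n_channels=10, n_features=5, n_classes=52):
        super().__init__()
        self.raw_cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU())
        self.feat_cnn = nn.Sequential(
            nn.Conv1d(n_channels * n_features, 32, kernel_size=3, padding=1),
            nn.ReLU())
        self.raw_lstm = nn.LSTM(32, 64, batch_first=True)
        self.feat_lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, raw, feats):
        # raw: (B, C, T) sEMG window; feats: (B, C*F, T') feature sequences
        r = self.raw_cnn(raw).transpose(1, 2)      # (B, T, 32)
        f = self.feat_cnn(feats).transpose(1, 2)   # (B, T', 32)
        _, (hr, _) = self.raw_lstm(r)              # final hidden state
        _, (hf, _) = self.feat_lstm(f)
        fused = torch.cat([hr[-1], hf[-1]], dim=1) # concatenate both streams
        return self.head(fused)
```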


Subject(s)
Deep Learning, Electromyography, Gestures, Neural Networks (Computer), Electromyography/methods, Humans, Signal Processing (Computer-Assisted), Pattern Recognition (Automated)/methods, Algorithms, Machine Learning, Hand/physiology, Memory (Short-Term)/physiology
5.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity because they allow effortless and natural interaction between the user and the machine: information gathered from one or more sensing modalities is processed, and user intentions are transcribed into the desired actions. Because test-time data in dynamic environments continuously change in unforeseen ways, HMIs must be re-calibrated frequently and periodically with newly acquired data; this burden contributes significantly to their abandonment and remains unexplored by the ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable to the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a domain-adversarial training algorithm), with proper initialization, improves classification accuracy by an average of 24.99% compared with the no-re-calibration setting. However, our results suggest that when the experimental setup and the UDA configuration differ, the observed enhancements may be small or even unnoticeable.
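
DANN's core mechanism is a gradient reversal layer: features pass through unchanged in the forward direction, while gradients from the domain classifier are negated on the way back, pushing the encoder toward session-invariant features. A minimal PyTorch sketch, with module names assumed:

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # reversed gradient for x

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# During re-calibration, unlabeled within-day data drives only the domain
# loss: domain_logits = domain_head(grad_reverse(encoder(x), lam))
```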


Subject(s)
Algorithms, Ultrasonography, Humans, Ultrasonography/methods, User-Computer Interface, Wrist/physiology, Wrist/diagnostic imaging, Neural Networks (Computer), Fingers/physiology, Man-Machine Systems, Gestures
6.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676024

ABSTRACT

In recent decades, technological advancements have transformed industry, highlighting the efficiency of automation and the importance of safety. The integration of augmented reality (AR) and gesture recognition has emerged as an innovative approach to creating interactive environments for industrial equipment. Gesture recognition enhances AR applications by allowing intuitive interactions. This study presents a web-based architecture for the integration of AR and gesture recognition, designed for interaction with industrial equipment. Emphasizing hardware-agnostic compatibility, the proposed structure offers intuitive interaction with equipment control systems through natural gestures. Experimental validation, conducted using Google Glass, demonstrated the practical viability and potential of this approach in industrial operations. The development focused on optimizing the system's software and implementing techniques such as normalization, clamping, conversion, and filtering to achieve accurate and reliable gesture recognition under different usage conditions. The proposed approach promotes safer and more efficient industrial operations and contributes to research in AR and gesture recognition. Future work will include improving gesture recognition accuracy, exploring alternative gestures, and expanding platform integration to improve the user experience.
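
The conditioning steps named above (normalization, clamping, filtering) might look like the following sketch for a stream of hand-landmark coordinates; the range bounds and smoothing factor are illustrative assumptions.

```python
import numpy as np

def condition(raw: np.ndarray, lo: float, hi: float, alpha: float = 0.3):
    """raw: (T,) coordinate stream from the gesture tracker."""
    clamped = np.clip(raw, lo, hi)        # clamp outliers to a valid range
    norm = (clamped - lo) / (hi - lo)     # normalize to [0, 1]
    smoothed = np.empty_like(norm)        # exponential moving average filter
    smoothed[0] = norm[0]
    for t in range(1, len(norm)):
        smoothed[t] = alpha * norm[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```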


Subject(s)
Augmented Reality, Gestures, Humans, Industries, Software, Pattern Recognition (Automated)/methods, User-Computer Interface
7.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732933

ABSTRACT

This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out: data were acquired from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) with appropriate sensor placement. Interference was then removed with filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with salient features. Additionally, this paper constructs a hybrid network model combining Convolutional Neural Networks and Artificial Neural Networks and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction models, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
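
A minimal sketch of the sEMG-to-joint-angle regression component: a backpropagation network trained with a momentum term and an adaptive learning rate (here via PyTorch's ReduceLROnPlateau). Layer sizes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5,
                                                   patience=10)
loss_fn = nn.MSELoss()

def train_step(emg_feats, joint_angle):
    """emg_feats: (B, 8) channel features; joint_angle: (B, 1) target."""
    opt.zero_grad()
    loss = loss_fn(model(emg_feats), joint_angle)
    loss.backward()
    opt.step()
    sched.step(loss.item())  # shrink the learning rate when loss plateaus
    return loss.item()
```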


Subject(s)
Algorithms, Arm, Electromyography, Movement, Neural Networks (Computer), Signal Processing (Computer-Assisted), Humans, Electromyography/methods, Arm/physiology, Movement/physiology, Gestures, Male, Adult
8.
Sensors (Basel) ; 24(4)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38400278

ABSTRACT

Commercial, high-tech upper limb prostheses offer extensive functionality and are equipped with high-grade control mechanisms. However, they are relatively expensive and are not accessible to the majority of amputees. Therefore, more affordable, accessible, open-source, and 3D-printable alternatives are being developed. A commonly proposed approach to controlling these prostheses is to use the bio-potentials generated by skeletal muscles, which can be measured using surface electromyography (sEMG). However, this control mechanism either lacks accuracy when a single sEMG sensor is used or requires wires to connect an array of multiple nodes, which hinders patients' movements. To mitigate these issues, we have developed a circular, wireless sEMG array that collects sEMG potentials from an array of electrodes that can be spread, uniformly or not, around the circumference of a patient's arm. The modular sEMG system is combined with a Bluetooth Low Energy System on Chip, motion sensors, and a battery. We benchmarked this system against a commercial, wired, state-of-the-art alternative and found a Spearman correlation of r = 0.98 (p < 0.01) between the root-mean-squared (RMS) amplitudes of the sEMG measured by both devices for the same set of 20 reference gestures, demonstrating that the system is accurate in measuring sEMG. Additionally, we demonstrated that the RMS amplitudes of sEMG measurements from the different nodes within the array are uncorrelated, indicating that they contain independent information that can be used for higher accuracy in gesture recognition. We show this by training a random forest classifier that distinguishes between 6 gestures with an accuracy of 97%. This work is important for a large and growing group of amputees whose quality of life could be improved using this technology.
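
The benchmarking computation reads like windowed RMS amplitudes followed by a Spearman rank correlation; a sketch under an assumed window length, with placeholder recordings standing in for the two devices' data:

```python
import numpy as np
from scipy.stats import spearmanr

def windowed_rms(signal: np.ndarray, win: int = 200) -> np.ndarray:
    frames = signal[: signal.size // win * win].reshape(-1, win)
    return np.sqrt((frames ** 2).mean(axis=1))

rms_wireless = windowed_rms(np.random.randn(20000))  # placeholder data
rms_wired = windowed_rms(np.random.randn(20000))
rho, p = spearmanr(rms_wireless, rms_wired)
print(f"Spearman r = {rho:.2f} (p = {p:.3g})")
```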


Subject(s)
Amputees, Artificial Limbs, Humans, Electromyography, Quality of Life, Skeletal Muscle/physiology, Gestures, Hand/physiology
9.
Sensors (Basel) ; 24(4)2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38400416

ABSTRACT

Interest in developing techniques for acquiring and decoding biological signals is on the rise in the research community. This interest spans various applications, with a particular focus on prosthetic control and rehabilitation, where achieving precise hand gesture recognition from surface electromyography signals is crucial due to the complexity and variability of surface electromyography data. Advanced signal processing and data analysis techniques are required to effectively extract meaningful information from these signals. In our study, we utilized three datasets: NinaPro Database 1, CapgMyo Database A, and CapgMyo Database B, chosen for their open-source availability and established role in evaluating surface electromyography classifiers. Hand gesture recognition using surface electromyography signals draws inspiration from image classification algorithms, which led us to introduce and develop a novel Signal Transformer. We systematically investigated two feature extraction techniques for surface electromyography signals: the Fast Fourier Transform and wavelet-based feature extraction. Our study demonstrated significant advancements in surface electromyography signal classification, particularly on NinaPro Database 1 and CapgMyo Database A, surpassing existing results in the literature. The newly introduced Signal Transformer outperformed traditional Convolutional Neural Networks by excelling at capturing structural details and incorporating global information from image-like signals through robust basis functions. Additionally, the inclusion of an attention mechanism within the Signal Transformer highlighted the significance of individual electrode readings, improving classification accuracy. These findings underscore the potential of the Signal Transformer as a powerful tool for precise and effective surface electromyography signal classification, with promising applications in prosthetic control and rehabilitation.
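
The two feature-extraction routes compared above could be sketched as follows (requires NumPy and the PyWavelets package); the wavelet family and decomposition depth are assumptions.

```python
import numpy as np
import pywt

def fft_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one single-channel sEMG window."""
    return np.abs(np.fft.rfft(window))

def wavelet_features(window: np.ndarray, level: int = 3) -> np.ndarray:
    """Energy of each wavelet sub-band as a compact descriptor."""
    coeffs = pywt.wavedec(window, "db4", level=level)
    return np.array([(c ** 2).sum() for c in coeffs])
```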


Subject(s)
Gestures, Neural Networks (Computer), Electromyography/methods, Algorithms, Signal Processing (Computer-Assisted)
10.
Sensors (Basel) ; 24(11)2024 May 25.
Article in English | MEDLINE | ID: mdl-38894205

ABSTRACT

By integrating sensing capability into wireless communication, wireless sensing technology has become a promising contactless and non-line-of-sight sensing paradigm that explores the dynamic characteristics of channel state information (CSI) for recognizing human behaviors. In this paper, we develop an effective device-free human gesture recognition (HGR) system based on WiFi wireless sensing technology, in which the complementary CSI amplitude and phase of the communication link are jointly exploited. To improve the quality of the collected CSI, a linear transform-based data processing method is first used to eliminate phase offset and noise and to reduce the impact of multi-path effects. Then, six different time- and frequency-domain features are computed for both amplitude and phase, namely the mean, variance, root mean square, interquartile range, energy entropy, and power spectral entropy, and a feature selection algorithm based on filtering and principal component analysis is proposed to remove irrelevant and redundant features, resulting in a feature subspace that distinguishes different gestures. On this basis, a support vector machine-based stacking algorithm is proposed for gesture classification using the selected, complementary amplitude and phase features. Lastly, we conduct experiments under a practical scenario with one transmitter and one receiver. The results demonstrate that the average accuracy of the proposed HGR system is 98.3% and that the F1-score is over 97%.
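
The six per-stream features listed above can be computed as below; the entropy terms follow the usual energy/spectral definitions, which is our interpretation since the abstract gives no formulas, and the frame length is an assumption.

```python
import numpy as np
from scipy.signal import periodogram

def csi_features(x: np.ndarray, fs: float = 100.0) -> np.ndarray:
    """x: one CSI amplitude or phase stream. Returns the six features."""
    q75, q25 = np.percentile(x, [75, 25])
    frames = x[: x.size // 10 * 10].reshape(-1, 10)   # short frames
    p_e = np.square(frames).sum(axis=1)
    p_e = p_e / p_e.sum()                             # energy distribution
    _, psd = periodogram(x, fs=fs)
    p_s = psd / psd.sum()                             # spectral distribution
    return np.array([
        x.mean(), x.var(), np.sqrt((x ** 2).mean()), q75 - q25,
        -(p_e * np.log2(p_e + 1e-12)).sum(),          # energy entropy
        -(p_s * np.log2(p_s + 1e-12)).sum(),          # power spectral entropy
    ])
```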

11.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset comprising 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigation showed that selecting either 65 or 75 features with the RFE method led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed that three additional features from three specific sensors could enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
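
The RFE route that reached 97.14% with 65 features maps directly onto scikit-learn; the base estimator (a linear SVM) and the placeholder data below are assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 296))       # placeholder: 8 sensors x 37 features
y = rng.integers(0, 15, 300)     # 15 hand force exercises
selector = RFE(SVC(kernel="linear"), n_features_to_select=65, step=5)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape, selector.support_.sum())  # (300, 65) 65
```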


Subject(s)
Electromyography, Gestures, Hand, Humans, Electromyography/methods, Hand/physiology, Algorithms, Male, Adult, Female, Signal Processing (Computer-Assisted)
12.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving recognition accuracy by increasing the complexity of network models. However, training a complex model requires a significant amount of data, which escalates both the user burden and the computational cost. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across users, conventional machine learning approaches that rely on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to address the problems of high computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis, and the Pearson correlation coefficient (MPP). This method filters out the optimal subset of features that matches a specific user and, combined with an SVM classifier, accurately and efficiently recognizes the user's gesture movements. To validate the method, we designed an experiment including five gesture actions. The experimental results show that, compared with the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected feature subset as the input to any of the classifiers. This study provides an effective foundation for user-specific fine hand movement decoding based on sEMG signals.
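
One plausible reading of the MPP combination is sketched below: rank features by mutual information, discard those highly Pearson-correlated with an already-kept feature, and use PCA to check retained variance. The paper's exact weighting may differ; the thresholds here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

def mpp_select(X, y, corr_thresh=0.9, n_keep=20):
    order = np.argsort(mutual_info_classif(X, y))[::-1]  # best MI first
    kept = []
    for j in order:
        r = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in kept]
        if not r or max(r) < corr_thresh:                # redundancy filter
            kept.append(j)
        if len(kept) == n_keep:
            break
    cum_var = PCA().fit(X[:, kept]).explained_variance_ratio_.cumsum()
    return kept, cum_var
```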


Subject(s)
Electromyography, Forearm, Gestures, Hand, Pattern Recognition (Automated), Humans, Electromyography/methods, Hand/physiology, Forearm/physiology, Pattern Recognition (Automated)/methods, Male, Adult, Principal Component Analysis, Female, Algorithms, Movement/physiology, Young Adult, Support Vector Machine, Machine Learning
13.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.


Subject(s)
Gestures, Robotics, Robotics/methods, Humans, Pattern Recognition (Automated)/methods, Algorithms, Artificial Intelligence
14.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339637

ABSTRACT

Surface electromyogram (sEMG)-based gesture recognition has emerged as a promising avenue for developing intelligent prostheses for upper limb amputees. However, the temporal variations in sEMG have rendered recognition models less efficient than anticipated. By using cross-session calibration and increasing the amount of training data, it is possible to reduce these variations. The impact of varying the amount of calibration and training data on gesture recognition performance for amputees is still unknown. To assess these effects, we present four datasets for the evaluation of calibration data and examine the impact of the amount of training data on benchmark performance. Two amputees who had undergone amputations years prior were recruited, and seven sessions of data were collected for analysis from each of them. Ninapro DB6, a publicly available database containing data from ten healthy subjects across ten sessions, was also included in this study. The experimental results show that the calibration data improved the average accuracy by 3.03%, 6.16%, and 9.73% for the two subjects and Ninapro DB6, respectively, compared to the baseline results. Moreover, it was discovered that increasing the number of training sessions was more effective in improving accuracy than increasing the number of trials. Three potential strategies are proposed in light of these findings to enhance cross-session models further. We consider these findings to be of the utmost importance for the commercialization of intelligent prostheses, as they demonstrate the criticality of gathering calibration and cross-session training data, while also offering effective strategies to maximize the utilization of the entire dataset.


Subject(s)
Amputees, Artificial Limbs, Humans, Electromyography/methods, Calibration, Pattern Recognition (Automated)/methods, Upper Extremity, Algorithms
15.
Sensors (Basel) ; 24(2)2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38257674

ABSTRACT

During the COVID-19 pandemic, as case numbers continued to rise, there was growing demand for alternatives to traditional buttons or touch screens as control methods. Most current gesture recognition technologies, however, rely on machine vision, which can yield suboptimal recognition results, especially when the camera operates in low-light conditions or encounters complex backgrounds. This study introduces an innovative gesture recognition system for large movements that combines millimeter-wave radar with a thermal imager, in which a multi-color conversion algorithm, together with deep learning approaches, improves palm recognition on the thermal imager. While the user performs gestures, the mmWave radar captures point cloud information, which is then analyzed through neural network model inference. The system also integrates thermal imaging and palm recognition to effectively track and monitor hand movements on the screen. The results suggest that this combined method significantly improves accuracy, reaching a rate of over 80%.


Subject(s)
COVID-19, Gestures, Humans, Pandemics, Algorithms, COVID-19/diagnosis, Hand/diagnostic imaging
16.
Sensors (Basel) ; 24(5)2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38474890

ABSTRACT

RF-based gesture recognition systems outperform computer vision-based systems in terms of user privacy. The integration of Wi-Fi sensing and deep learning has opened new application areas for intelligent multimedia technology. Although promising, existing systems have multiple limitations: (1) they only work well in a fixed domain; (2) when working in a new domain, they require the recollection of a large amount of data. These limitations lead either to subpar cross-domain performance or to a huge amount of human effort, impeding widespread adoption in practical scenarios. We propose Wi-AM, a privacy-preserving gesture recognition framework, to address the above limitations. Wi-AM can accurately recognize gestures in a new domain with only one sample. To remove irrelevant disturbances induced by interfering domain factors, we design a multi-domain adversarial scheme that reduces the differences in data distribution between domains and extracts the maximum amount of transferable, gesture-related features. Moreover, to adapt quickly to an unseen domain with only a few samples, Wi-AM adopts a meta-learning framework to fine-tune the trained model to a new domain in a one-sample-per-gesture manner while achieving accurate cross-domain performance. Extensive experiments on a real-world dataset demonstrate that Wi-AM can recognize gestures in an unseen domain with average accuracies of 82.13% and 86.76% for 1 and 3 data samples, respectively.


Subject(s)
Gestures, Pattern Recognition (Automated), Humans, Recognition (Psychology), Information Technology, Intelligence, Algorithms
17.
Sensors (Basel) ; 24(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38544014

ABSTRACT

This study investigates the characteristics of a novel origami-based, elastomeric actuator and a soft gripper, which are controlled by hand gestures that are recognized through machine learning algorithms. The lightweight paper-elastomer structure employed in this research exhibits distinct actuation features in four key areas: (1) It requires approximately 20% less pressure for the same bending amplitude compared to pneumatic network actuators (Pneu-Net) of equivalent weight, and even less pressure compared to other actuators with non-linear bending behavior; (2) The control of the device is examined by validating the relationship between pressure and the bending angle, as well as the interaction force and pressure at a fixed bending angle; (3) A soft robotic gripper comprising three actuators is designed. Enveloping and pinch grasping experiments are conducted on various shapes, which demonstrate the gripper's potential in handling a wide range of objects for numerous applications; and (4) A gesture recognition algorithm is developed to control the gripper using electromyogram (EMG) signals from the user's muscles.


Subject(s)
Algorithms, Elastomers, Electromyography, Gestures, Machine Learning
18.
Sensors (Basel) ; 24(6)2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38544062

ABSTRACT

To improve the real-time performance of gesture recognition based on the micro-Doppler map of mmWave radar, point-cloud-based gesture recognition for mmWave radar is proposed in this paper. Recognition proceeds in two steps. The first step is to estimate the point cloud of the gestures by 3D-FFT and peak grouping. The second step is to train the TRANS-CNN model, which combines multi-head self-attention with a 1D convolutional network to extract deeper features from the point cloud data and categorize the gestures. In the experiments, the TI IWR1642 mmWave radar sensor is used as a benchmark to evaluate the feasibility of the proposed approach. The results show that the accuracy of the gesture recognition reaches 98.5%. To further prove the effectiveness of our approach, a simple 2Tx2Rx radar sensor was developed in our lab, with which the recognition accuracy reaches 97.1%. The results show that our proposed gesture recognition approach achieves the best real-time performance with limited training data in comparison with existing methods.
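
A hedged sketch of a TRANS-CNN-style classifier head: multi-head self-attention over the gesture's point cloud followed by a 1D convolution; every size here is an assumption rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class TransCNN(nn.Module):
    def __init__(self, d_point=4, d_model=64, n_classes=8):
        super().__init__()
        self.proj = nn.Linear(d_point, d_model)  # embed (x, y, z, v) points
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)
        self.conv = nn.Conv1d(d_model, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, n_classes)

    def forward(self, pts):                      # pts: (B, N, d_point)
        z = self.proj(pts)
        z, _ = self.attn(z, z, z)                # self-attention over points
        z = self.conv(z.transpose(1, 2)).relu().mean(dim=2)  # pool over N
        return self.head(z)
```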

19.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339542

ABSTRACT

Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, probably because of the large number of patterns (46 types, a mixture of static and dynamic gestures), the dynamic gestures have been excluded from most studies. Few researchers have worked on a dynamic JSL alphabet, and the reported accuracy is unsatisfactory. We propose a dynamic JSL recognition system that uses effective feature extraction and feature selection approaches to overcome these challenges. The procedure follows hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures with standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features were proposed. The significance of these features is that the same feature generation method can be used regardless of the number of frames or whether the gestures are dynamic or static. We employed a Random Forest (RF)-based feature selection approach to select the most informative features. Finally, we fed the reduced features into a kernel-based Support Vector Machine (SVM) classifier. Evaluations conducted on our proprietary, newly created dynamic Japanese Sign Language alphabet dataset and on the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for the deaf and hard-of-hearing, with broader implications for sign language recognition systems globally.
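
The selection-plus-classification stage maps naturally onto scikit-learn: random-forest importances choose the features and a kernel SVM classifies. The data shapes below are placeholder assumptions, not the paper's landmark features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 84))        # placeholder landmark-derived features
y = rng.integers(0, 46, 500)     # 46 JSL alphabet classes
clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200,
                                           random_state=0)),
    SVC(kernel="rbf"),
)
clf.fit(X, y)                    # RF picks features, SVM classifies
```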


Subject(s)
Pattern Recognition (Automated), Sign Language, Humans, Japan, Pattern Recognition (Automated)/methods, Hand, Algorithms, Gestures
20.
Sensors (Basel) ; 24(2)2024 Jan 06.
Article in English | MEDLINE | ID: mdl-38257441

ABSTRACT

Hand gesture recognition, one of the fields of human-computer interaction (HCI) research, extracts the user's pattern using sensors. Radio detection and ranging (RADAR) sensors are robust in severe environments and convenient for hand gesture sensing. Existing studies have mostly adopted continuous-wave (CW) radar, which performs well only at a fixed distance because it cannot measure range. This paper proposes a hand gesture recognition system that utilizes frequency-shift keying (FSK) radar, allowing a recognition method that works at various distances between the radar sensor and the user. The proposed system adopts a convolutional neural network (CNN) model for recognition. Experimental results show that the proposed system covers the range from 30 cm to 180 cm and achieves an accuracy of 93.67% over the entire range.
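
As a sketch only, a small CNN over radar map inputs of the kind the abstract implies might look like this; the input resolution, channel counts, and class count are assumptions.

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 6),          # 6 gesture classes at 64x64 input
)
logits = cnn(torch.randn(1, 1, 64, 64))  # one radar frame -> class scores
```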
