1.
Article in English | MEDLINE | ID: mdl-37983152

ABSTRACT

Hand gesture recognition (HGR) based on surface electromyogram (sEMG) and accelerometer (ACC) signals is increasingly attractive, and the fusion strategy is crucial for performance yet remains challenging. Neural-network-based fusion methods currently achieve superior performance. Nevertheless, these methods typically fuse sEMG and ACC either at an early or a late stage, overlooking the cross-modal hierarchical information available in each hidden layer and thus fusing the modalities inefficiently. To this end, we propose a novel Alignment-Enhanced Interactive Fusion (AiFusion) model, which achieves effective fusion via a progressive hierarchical fusion strategy. Notably, AiFusion can flexibly perform both complete and incomplete multimodal HGR. Specifically, AiFusion contains two unimodal branches and a cascaded transformer-based multimodal fusion branch. The fusion branch characterizes modality-interactive knowledge by adaptively capturing inter-modal similarity and fusing hierarchical features from all branches layer by layer. The modality-interactive knowledge is then aligned with the unimodal knowledge via cross-modal supervised contrastive learning and online distillation, applied in the embedding and probability spaces, respectively. These alignments further improve fusion quality and refine the modality-specific representations. Finally, the recognition outcome is determined by whichever modalities are available, which handles the incomplete multimodal HGR problem frequently encountered in real-world scenarios. Experimental results on five public datasets demonstrate that AiFusion outperforms most state-of-the-art benchmarks in complete multimodal HGR and, impressively, also surpasses the unimodal baselines in the more challenging incomplete setting. AiFusion thus offers a promising route to effective and robust multimodal HGR-based interfaces.
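The abstract does not spell out the loss formulations, but a cross-modal supervised contrastive alignment in the embedding space might look roughly like the PyTorch sketch below. The function name, temperature value, and batch layout are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def cross_modal_supcon_loss(emg_emb, acc_emb, labels, temperature=0.1):
        """Supervised contrastive alignment between sEMG and ACC embeddings.

        Each sEMG embedding is pulled toward the ACC embeddings sharing its
        gesture label and pushed away from the rest, and vice versa.
        Shapes: emg_emb, acc_emb are (batch, dim); labels is (batch,).
        """
        emg = F.normalize(emg_emb, dim=1)
        acc = F.normalize(acc_emb, dim=1)
        logits = emg @ acc.t() / temperature  # (B, B) cross-modal similarities
        # Same-gesture pairs across modalities count as positives.
        pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()

        # sEMG -> ACC direction: average log-probability over positives.
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        loss_e2a = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)

        # ACC -> sEMG direction, using the transposed similarity matrix.
        log_prob_t = logits.t() - torch.logsumexp(logits.t(), dim=1, keepdim=True)
        loss_a2e = -(pos_mask * log_prob_t).sum(1) / pos_mask.sum(1).clamp(min=1)
        return 0.5 * (loss_e2a.mean() + loss_a2e.mean())

Under this assumed form, sample i appears in both modality batches with the same label, so every anchor has at least one positive; the companion online distillation would act analogously on the output probabilities rather than the embeddings.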


Subjects
Benchmarking, Gestures, Humans, Electric Power Supplies, Electromyography, Learning
2.
Front Neurosci; 15: 645374, 2021.
Article in English | MEDLINE | ID: mdl-33927589

ABSTRACT

Herein, we propose a real-time, stably controlled gait-switching method for an exoskeleton rehabilitation robot. Exoskeleton rehabilitation robots have developed rapidly over the past decade and can restore valuable motor ability to paraplegic users. However, keeping the human-exoskeleton system stable while conserving the wearer's strength remains challenging: constantly switching gaits during walking shifts the center of gravity and can unbalance the system. In this study, we found that forming an equilateral triangle between the two crutch support points and the supporting leg improves walking stability and ergonomic interaction. First, gait planning and a stability analysis for the lower-limb exoskeleton are presented, based on a human kinematics model and the zero-moment-point method. Second, a neural interface based on surface electromyography (sEMG) is constructed to perform intention recognition and muscle fatigue estimation. Third, the stability of the human-exoskeleton system and the ergonomic effects are tested across different gaits, with planned and unplanned gait-switching strategies, on the SIAT lower-limb rehabilitation exoskeleton. Intention recognition with a long short-term memory (LSTM) model achieves an accuracy of nearly 99%. The experimental results verify the feasibility and efficiency of the proposed gait-switching method in enhancing the stability and ergonomics of lower-limb rehabilitation exoskeletons.
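The abstract reports nearly 99% intention-recognition accuracy with an LSTM but gives no architecture details. A minimal PyTorch sketch of such a classifier could look like the following; the channel count, window length, hidden size, and number of intention classes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class IntentionLSTM(nn.Module):
        """LSTM classifier for sEMG-based gait-intention recognition.

        Input: a window of sEMG samples, shape (batch, time, channels).
        Output: logits over the candidate gait intentions.
        """
        def __init__(self, n_channels=8, hidden=64, n_intentions=5):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_intentions)

        def forward(self, x):              # x: (batch, time, channels)
            out, _ = self.lstm(x)          # (batch, time, hidden)
            return self.head(out[:, -1])   # classify from the last time step

    # Example: a batch of 200-sample windows from 8 sEMG channels.
    model = IntentionLSTM()
    logits = model(torch.randn(4, 200, 8))  # -> (4, 5) intention logits

In a real-time setting, the predicted intention would gate the planned gait switch only when the zero-moment-point stability condition is also satisfied, consistent with the safety emphasis of the abstract.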
