Transferable multi-modal fusion in knee angles and gait phases for their continuous prediction.
Guo, Zhenpeng; Zheng, Huixian; Wu, Hanrui; Zhang, Jia; Zhou, Guoxu; Long, Jinyi.
Affiliations
  • Guo Z; College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
  • Zheng H; College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
  • Wu H; College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
  • Zhang J; College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
  • Zhou G; School of Automation, Guangdong University of Technology, Guangzhou 510006, People's Republic of China.
  • Long J; College of Information Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China.
J Neural Eng ; 20(3)2023 05 24.
Article in En | MEDLINE | ID: mdl-37059084
ABSTRACT
Objective. The gait phase and joint angle are two essential and complementary components of kinematics during normal walking, and their accurate prediction is critical for lower-limb rehabilitation, such as controlling exoskeleton robots. Multi-modal signals have been used to improve the prediction of the gait phase or the joint angle separately, but few reports have examined how these signals can be used to predict both simultaneously.

Approach. To address this problem, we propose a new method, transferable multi-modal fusion (TMMF), that continuously predicts knee angles and the corresponding gait phases by fusing multi-modal signals. Specifically, TMMF consists of a multi-modal signal fusion block, a time-series feature extractor, a regressor, and a classifier. The multi-modal signal fusion block leverages the maximum mean discrepancy (MMD) to reduce the distribution discrepancy across different modalities in the latent space, achieving the goal of transferable multi-modal fusion. Subsequently, a long short-term memory (LSTM)-based network extracts feature representations from the time-series data to predict the knee angles and gait phases simultaneously. To validate our proposal, we designed an experimental paradigm with random walking and resting to collect multi-modal biomedical signals from electromyography, gyroscopes, and virtual reality.

Main results. Comprehensive experiments on our constructed dataset demonstrate the effectiveness of the proposed method: TMMF achieves a root mean square error of 0.090 ± 0.022 s in knee angle prediction and a precision of 83.7 ± 7.7% in gait phase prediction.

Significance. We demonstrate the feasibility and validity of using TMMF to continuously predict lower-limb kinematics from multi-modal biomedical signals. The proposed method shows potential for predicting the motor intent of patients with different pathologies.
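The core idea of the fusion block, measuring how far apart two modalities' latent feature distributions are via the maximum mean discrepancy, can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian kernel, the bandwidth `sigma`, and the synthetic feature arrays below are illustrative assumptions, standing in for latent features of two modalities.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of a and b."""
    # Pairwise squared Euclidean distances, computed without explicit loops.
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy.

    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)];
    near zero when x and y come from the same distribution.
    """
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
emg = rng.normal(0.0, 1.0, size=(200, 8))      # stand-in "EMG" latent features
gyro = rng.normal(0.0, 1.0, size=(200, 8))     # same distribution as emg
shifted = rng.normal(2.0, 1.0, size=(200, 8))  # deliberately mismatched distribution

print(mmd2(emg, gyro))     # small: the distributions already match
print(mmd2(emg, shifted))  # noticeably larger: the distributions differ
```

In a training loop, this quantity would be added to the task losses so that minimizing it pulls the modalities' latent distributions together; the kernel choice and how the MMD term is weighted against the regression and classification losses are design decisions not specified in the abstract.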
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Lower Extremity / Gait Study type: Prognostic_studies / Risk_factors_studies Limit: Humans Language: En Journal: J Neural Eng Journal subject: NEUROLOGY Year of publication: 2023 Document type: Article
