ABSTRACT
Motion intent recognition for lower limb prostheses can be regarded as a form of short-term action recognition, in which the central problem is capturing the instantaneous gait conversion (known as the transitional pattern) between two adjacent steady-state gait modes. Traditional intent recognition methods usually classify transitional patterns with a set of statistical features. However, statistical features extracted from the short-term signals around the instantaneous conversion are empirically unstable, which may degrade classification accuracy. With this in mind, we introduce the one-dimensional dual-tree complex wavelet transform (1D-DTCWT) for motion intent recognition in lower limb prostheses. On the one hand, the local analysis ability of the wavelet transform amplifies the instantaneous variation of the gait information, making the features extracted from the transitional pattern between two adjacent steady states more stable. On the other hand, the translation invariance and direction selectivity of the 1D-DTCWT help to capture the continuous structure of the patterns, which better reflects the inherent continuity of human lower limb movement. In the experiments, we recruited ten able-bodied subjects and one amputee and collected data while they performed five steady states and eight transitional states. The results show that the recognition accuracy for the able-bodied subjects reached 98.91%, 98.92%, and 97.27% for the steady states, transitional states, and total motion states, respectively, while the accuracy for the amputee reached 100%, 91.16%, and 90.27% for the same categories. This evidence indicates that the proposed method captures the instantaneous gait conversion (i.e., the motion intent) between adjacent steady states better than the state of the art.
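To make the feature-extraction step concrete, the following is a minimal sketch, not the authors' implementation, of computing 1D-DTCWT subband statistics from one short gait-signal window. It assumes the open-source Python `dtcwt` package (its `Transform1d` API), and the window length, decomposition depth, and feature set are illustrative assumptions.

```python
import numpy as np
import dtcwt  # open-source dual-tree complex wavelet transform package (assumed available)

def dtcwt_features(window, nlevels=4):
    """Extract magnitude statistics of 1D-DTCWT subbands from one signal window.

    `window` is a 1D array of gait sensor samples (e.g., one IMU channel over the
    short-term segment around a gait transition). The window length, level count,
    and chosen statistics are illustrative assumptions, not the paper's exact setup.
    """
    transform = dtcwt.Transform1d()
    pyramid = transform.forward(np.asarray(window, dtype=float), nlevels=nlevels)

    feats = []
    for highpass in pyramid.highpasses:          # complex coefficients, one array per level
        mag = np.abs(highpass)                   # magnitudes are approximately shift invariant
        feats.extend([mag.mean(), mag.std(), mag.max()])
    low = np.asarray(pyramid.lowpass).ravel()    # real lowpass approximation
    feats.extend([low.mean(), low.std()])
    return np.array(feats)

# Example: one 256-sample window per channel; in a full pipeline the features from
# all channels would be concatenated and passed to a classifier.
rng = np.random.default_rng(0)
demo_window = rng.standard_normal(256)
print(dtcwt_features(demo_window).shape)
```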
Subjects
Amputees, Artificial Limbs, Algorithms, Humans, Motion (Physics), Wavelet Analysis

ABSTRACT
Powered intelligent lower limb prostheses can actuate the knee and ankle joints, allowing transfemoral amputees to perform seamless transitions between locomotion states with the help of an intent recognition system. However, prior intent recognition studies often install multiple sensors on the prosthesis and apply machine learning techniques to the time-series data using empirically chosen features. We instead propose a novel method for training an intent recognition system that provides natural transitions between level walking, stair ascent/descent, and ramp ascent/descent. Since the transition between two neighboring states is driven by motion intent, we aim to learn the mapping between the motion state of the healthy leg and the amputee's motion intent before the upcoming transition of the prosthesis. We attach inertial measurement units (IMUs) to the healthy leg of unilateral lower limb amputees to monitor its locomotion state, analyze the IMU data within the early swing phase of the healthy leg, and feed these data into a convolutional neural network (CNN) that learns the feature mapping without expert involvement. The proposed method can predict the motion intent of both unilateral amputees and able-bodied subjects, and helps to adaptively calibrate the control strategy for actuating a powered intelligent prosthesis in advance. The experimental results show that the recognition accuracy reaches a high level (94.15% for able-bodied subjects, 89.23% for amputees) on 13 classes of motion intent, comprising five steady states on different terrains and eight transitional states among them.
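As an illustration of the recognition pipeline described above, here is a minimal sketch of a 1D CNN classifier over early-swing-phase IMU windows, assuming PyTorch. The channel count, window length, and network depth are illustrative assumptions, not the authors' architecture; only the 13-class output follows from the abstract.

```python
import torch
import torch.nn as nn

# Assumed input format: one early-swing-phase window per stride, shaped
# (batch, channels, samples). The channel count (e.g., 6 for one IMU: 3-axis
# accelerometer + 3-axis gyroscope) and window length (64 samples) are
# illustrative assumptions; 13 output classes = 5 steady + 8 transitional.
NUM_CHANNELS = 6
WINDOW_LEN = 64
NUM_CLASSES = 13

class IntentCNN(nn.Module):
    """Small 1D CNN mapping one IMU window to a motion-intent class."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # global pooling keeps the head window-length agnostic
        )
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):
        x = self.features(x).squeeze(-1)  # (batch, 64)
        return self.classifier(x)         # raw logits; pair with CrossEntropyLoss for training

# Quick shape check with a random batch of 8 windows.
model = IntentCNN()
dummy = torch.randn(8, NUM_CHANNELS, WINDOW_LEN)
print(model(dummy).shape)  # torch.Size([8, 13])
```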