Results 1 - 3 of 3
1.
IEEE J Biomed Health Inform; 27(11): 5345-5356, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37665702

ABSTRACT

Reconstructing and predicting 3D human walking poses in unconstrained measurement environments has the potential to support health monitoring systems for people with movement disabilities, by assessing progression after treatment and providing information for assistive device control. The latest pose estimation algorithms utilize motion capture systems, which capture data from IMU sensors and third-person view cameras. However, third-person views are not always available for outpatients on their own. Thus, we propose the wearable motion capture problem of reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras, which can aid clinicians' diagnoses of patients outside the clinic. To solve this problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder, to reconstruct the 3D human pose over time and predict the 3D human poses at the following time steps. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint angle ground truth. The proposed AttRNet shows high accuracy on the new lower-limb WearableMotionCapture dataset, and it also outperforms the state-of-the-art methods on two public full-body pose datasets: DIP-IMU and TotalCapture.


Subjects
Motion Capture, Wearable Electronic Devices, Humans, Movement, Neural Networks (Computer), Physiological Monitoring, Motion (Physics), Biomechanical Phenomena
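
As a rough illustration of the architecture the abstract above names (a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal-attention recurrent decoder), the PyTorch sketch below wires those pieces together. Every layer size, sensor count, joint count, and the exact attention formulation is an assumption made for illustration; this is not the published AttRNet implementation.

# Hypothetical sketch of an attention-oriented recurrent encoder/decoder for
# IMU-based pose reconstruction and prediction; layer sizes, attention scheme,
# and module names are illustrative assumptions, not the published AttRNet.
import torch
import torch.nn as nn


class SensorAttentionEncoder(nn.Module):
    """Weights each IMU sensor per time step, then encodes with a GRU."""

    def __init__(self, num_sensors: int, feat_per_sensor: int, hidden: int = 128):
        super().__init__()
        # One scalar attention score per sensor, computed from its features.
        self.score = nn.Linear(feat_per_sensor, 1)
        self.gru = nn.GRU(num_sensors * feat_per_sensor, hidden, batch_first=True)

    def forward(self, x):
        # x: (batch, time, num_sensors, feat_per_sensor)
        attn = torch.softmax(self.score(x), dim=2)   # sensor-wise weights
        weighted = (attn * x).flatten(2)             # (batch, time, sensors*feat)
        out, h = self.gru(weighted)
        return out, h


class PosePredictor(nn.Module):
    """Reconstructs current joint angles and predicts future ones."""

    def __init__(self, num_sensors=7, feat_per_sensor=9, num_joints=10,
                 hidden=128, horizon=5):
        super().__init__()
        self.encoder = SensorAttentionEncoder(num_sensors, feat_per_sensor, hidden)
        self.reconstruct = nn.Linear(hidden, num_joints)   # pose at each step
        self.decoder = nn.GRU(num_joints, hidden, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(hidden, num_heads=4,
                                                   batch_first=True)
        self.predict = nn.Linear(hidden, num_joints)
        self.horizon = horizon

    def forward(self, x):
        enc_out, h = self.encoder(x)
        recon = self.reconstruct(enc_out)            # reconstruction branch
        # Autoregressive prediction of the next `horizon` poses.
        preds, prev = [], recon[:, -1:, :]
        for _ in range(self.horizon):
            dec_out, h = self.decoder(prev, h)
            # Attend over the encoder sequence (temporal attention).
            ctx, _ = self.temporal_attn(dec_out, enc_out, enc_out)
            prev = self.predict(ctx)
            preds.append(prev)
        return recon, torch.cat(preds, dim=1)


if __name__ == "__main__":
    imu = torch.randn(2, 100, 7, 9)           # 2 sequences, 100 frames, 7 IMUs
    recon, future = PosePredictor()(imu)
    print(recon.shape, future.shape)          # (2, 100, 10) (2, 5, 10)
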
2.
IEEE J Biomed Health Inform; 27(6): 2829-2840, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37030855

ABSTRACT

Human kinetics, specifically joint moments and ground reaction forces (GRFs), can provide important clinical information and can be used to control assistive devices. Traditionally, the collection of kinetics is mostly limited to the lab environment because it relies on data measured by a motion capture system and floor-embedded force plates to calculate the dynamics via musculoskeletal models. This spatially limited method makes it extremely challenging to measure kinetics outside the laboratory in a variety of walking conditions because of the expensive device setup and the large space required. Recently, machine learning with IMU sensors has been suggested as an alternative method for biomechanical analyses. Although these methods enable estimating human kinetic data outside the laboratory by linking IMU sensor data with kinetics datasets, they yield inaccurate kinetic estimates even in highly repeatable single walking conditions because they rely on generic deep learning algorithms. Thus, this paper proposes a novel deep learning model, Kinetics-FM-DLR-Ensemble-Net, for single-limb prediction of hip, knee, and ankle joint moments and 3-dimensional GRFs using three IMU sensors on the thigh, shank, and foot under several representative walking conditions of daily living, such as treadmill, level-ground, stair, and ramp walking. This is the first study that estimates both joint moments and GRFs in multiple walking conditions from IMU sensors via deep learning. Our deep learning model is versatile and accurate for identifying human kinetics across diverse subjects and walking conditions, and it outperforms state-of-the-art deep learning models for kinetics estimation by a large margin.


Subjects
Deep Learning, Humans, Biomechanical Phenomena, Walking, Lower Extremity, Knee Joint, Gait
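
The abstract above does not describe the internals of Kinetics-FM-DLR-Ensemble-Net, so the sketch below only illustrates the general task it addresses: regressing per-frame joint moments and 3-D GRFs from windows of thigh, shank, and foot IMU data. The CNN + GRU stand-in architecture, channel counts, and output layout are assumptions, not the authors' model.

# Hypothetical sketch of a regressor mapping thigh/shank/foot IMU windows to
# hip/knee/ankle moments and 3-D GRFs; the architecture is a generic
# CNN + GRU stand-in, not the published Kinetics-FM-DLR-Ensemble-Net.
import torch
import torch.nn as nn


class IMUKineticsRegressor(nn.Module):
    def __init__(self, in_channels: int = 27, hidden: int = 128,
                 num_outputs: int = 6):
        # in_channels: 3 IMUs x 9 channels (accel, gyro, mag) is assumed here.
        # num_outputs: hip/knee/ankle moments + 3 GRF components is assumed.
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_outputs)

    def forward(self, x):
        # x: (batch, time, channels) -> Conv1d expects (batch, channels, time)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        seq, _ = self.gru(feats)
        return self.head(seq)        # per-frame kinetic estimates


if __name__ == "__main__":
    window = torch.randn(4, 200, 27)               # 4 gait windows, 200 frames
    print(IMUKineticsRegressor()(window).shape)    # (4, 200, 6)
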
3.
IEEE J Biomed Health Inform; 26(8): 3906-3917, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35385394

ABSTRACT

Measurement of human body movement is an essential step in biomechanical analysis. The current standard for human motion capture systems uses infrared cameras to track reflective markers placed on a subject. While these systems can accurately track joint kinematics, the analyses are spatially limited to the lab environment. Though Inertial Measurement Units (IMUs) can eliminate these spatial limitations, such systems are impractical for use in daily living because they require many sensors, typically one per body segment. Given the need for practical and accurate estimation of joint kinematics, this study implements a reduced number of IMU sensors and employs a machine learning algorithm to map sensor data to joint angles. Our developed algorithm estimates hip, knee, and ankle angles in the sagittal plane using two shoe-mounted IMU sensors under different practical walking conditions: treadmill, overground, stair, and slope conditions. Specifically, we propose five deep learning networks that use combinations of Convolutional Neural Networks (CNNs) and Gated Recurrent Unit (GRU) based Recurrent Neural Networks (RNNs) as base learners for our framework. Using these five baseline models, we propose a novel framework, DeepBBWAE-Net, that implements ensemble techniques such as bagging, boosting, and weighted averaging to improve kinematic predictions. DeepBBWAE-Net predicts the three joint angles under each of the walking conditions with a Root Mean Square Error (RMSE) 6.93-29.0% lower than the individual base models. This is the first study that uses a reduced number of IMU sensors to estimate kinematics in multiple walking environments.


Subjects
Neural Networks (Computer), Shoes, Ankle Joint, Biomechanical Phenomena, Gait, Humans, Lower Extremity
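
The abstract above describes DeepBBWAE-Net as an ensemble of five CNN/GRU base learners combined with bagging, boosting, and weighted averaging. The sketch below shows only the simplest of those ideas, weighted averaging of base-learner outputs with learnable weights; the base architecture, channel counts (two shoe-mounted IMUs x 6 channels assumed), and combination rule are illustrative assumptions, not the published framework.

# Hypothetical sketch of ensembling several CNN/GRU base learners by weighted
# averaging; bagging/boosting of the bases and the exact DeepBBWAE-Net weights
# are not given in the abstract, so this combination rule is an assumption.
import torch
import torch.nn as nn


class CNNGRUBase(nn.Module):
    """One base learner: 1-D conv front end followed by a GRU regressor."""

    def __init__(self, in_channels: int = 12, hidden: int = 64,
                 num_angles: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, 32, kernel_size=5, padding=2)
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_angles)

    def forward(self, x):
        # x: (batch, time, channels); two shoe IMUs x 6 channels assumed.
        feats = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        seq, _ = self.gru(feats)
        return self.out(seq)     # sagittal hip/knee/ankle angles per frame


class WeightedAverageEnsemble(nn.Module):
    """Combines base-learner predictions with learnable softmax weights."""

    def __init__(self, num_bases: int = 5):
        super().__init__()
        self.bases = nn.ModuleList([CNNGRUBase() for _ in range(num_bases)])
        self.logits = nn.Parameter(torch.zeros(num_bases))

    def forward(self, x):
        preds = torch.stack([b(x) for b in self.bases], dim=0)
        w = torch.softmax(self.logits, dim=0).view(-1, 1, 1, 1)
        return (w * preds).sum(dim=0)


if __name__ == "__main__":
    imu = torch.randn(8, 150, 12)                   # 8 strides, 150 frames
    print(WeightedAverageEnsemble()(imu).shape)     # (8, 150, 3)
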