Results 1 - 6 of 6
1.
J Biomed Inform; 147: 104524, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37838288

ABSTRACT

Accurate gait detection is crucial for exploiting the rich health information embedded in gait. Vision-based approaches to gait detection have emerged as an alternative to demanding sensor-based approaches, but their application has been rather limited by complicated feature engineering and a heavy reliance on lateral views. This study therefore aimed to find a simple vision-based approach that is accurate and view-independent. A total of 22 participants performed six different actions representing standard and peculiar gaits, and the videos of these actions were used as input to the deep learning networks. Four networks, including a 2D convolutional neural network and an attention-based network, were trained on standard gaits, and their detection performance for both standard and peculiar gaits was assessed using measures including F1-scores. While all networks achieved strong detection performance, the CNN-Transformer network performed best for both standard and peculiar gaits, with little variation across action speeds or view angles. The study is expected to contribute to the wider application of vision-based approaches to gait detection and gait-based health monitoring both at home and in clinical settings.
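
The abstract names a 2D CNN and a CNN-Transformer among the networks compared but gives no implementation details. Below is a minimal PyTorch sketch of a CNN-Transformer gait detector of the kind described; the layer sizes, the 16-frame clip length, and the binary gait/non-gait output are assumptions made here for illustration only.

    # Minimal sketch of a CNN-Transformer gait detector. Layer sizes, frame count,
    # and the binary output are illustrative assumptions, not the paper's design.
    import torch
    import torch.nn as nn

    class CNNTransformerGaitDetector(nn.Module):
        def __init__(self, d_model=128, n_heads=4, n_layers=2):
            super().__init__()
            # Per-frame 2D CNN feature extractor (RGB frame -> d_model vector)
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, d_model),
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, 2)  # gait vs. non-gait

        def forward(self, clip):              # clip: (batch, frames, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)  # per-frame features
            feats = self.transformer(feats)                      # temporal attention
            return self.head(feats.mean(dim=1))                  # clip-level logits

    logits = CNNTransformerGaitDetector()(torch.randn(2, 16, 3, 112, 112))
    print(logits.shape)  # torch.Size([2, 2])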


Subjects
Gait, Neural Networks (Computer), Humans
2.
Sensors (Basel); 22(21), 2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36365930

ABSTRACT

Elderly gait is a rich source of information about physical and mental health. As an alternative to multiple sensors on the lower body, a single sensor on the pelvis has a positional advantage and can capture an abundance of information. This study aimed to improve the accuracy of gait event detection in the elderly using a single sensor on the waist and deep learning models. Data were gathered from elderly subjects equipped with three IMU sensors while they walked. Input taken only from the waist sensor was used to train 16 deep learning models, including CNN, RNN, and CNN-RNN hybrids with or without bidirectional and attention mechanisms; the ground truth was extracted from the foot IMU sensors. The CNN-BiGRU-Att model achieved high accuracies of 99.73% and 93.89% at tolerance windows of ±6 TS (±6 ms) and ±1 TS (±1 ms), respectively. Compared with previous studies on gait event detection, the model also showed a marked improvement in prediction error, with MAEs of 6.239 ms and 5.24 ms for heel-strike (HS) and toe-off (TO) events, respectively, at the ±1 TS tolerance window. The results demonstrate that CNN-RNN hybrid models with attention and bidirectional mechanisms are promising for accurate gait event detection with a single waist sensor. The study can contribute to reducing the burden of gait detection and to increasing its applicability in future wearable devices for remote health monitoring (RHM) or diagnosis.
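
As a rough illustration of the best-performing model family (a CNN-RNN hybrid with bidirectional and attention mechanisms), here is a minimal PyTorch sketch of a CNN-BiGRU model with a self-attention layer that labels each IMU sample with a gait-event class. The six input channels (3-axis accelerometer plus gyroscope), window length, and three-class output (none / heel-strike / toe-off) are assumptions, not the paper's exact design.

    # Minimal sketch of a CNN-BiGRU-Attention labeller for waist-IMU windows.
    # Channel count, window length, and class set are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CNNBiGRUAtt(nn.Module):
        def __init__(self, in_ch=6, hidden=64, n_classes=3):
            super().__init__()
            self.cnn = nn.Sequential(                       # local temporal features
                nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.bigru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
            self.att = nn.MultiheadAttention(2 * hidden, num_heads=2, batch_first=True)
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                               # x: (batch, time, channels)
            h = self.cnn(x.transpose(1, 2)).transpose(1, 2) # (batch, time, 64)
            h, _ = self.bigru(h)                            # (batch, time, 2*hidden)
            h, _ = self.att(h, h, h)                        # self-attention over window
            return self.head(h)                             # per-timestep class logits

    logits = CNNBiGRUAtt()(torch.randn(4, 200, 6))
    print(logits.shape)  # torch.Size([4, 200, 3])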


Subjects
Deep Learning, Wearable Electronic Devices, Humans, Aged, Algorithms, Gait
3.
Article in English | MEDLINE | ID: mdl-38083216

ABSTRACT

Vision-based gait analysis can play an important role in remote and continuous monitoring of the health of the elderly. However, most vision-based approaches compute spatiotemporal gait parameters from human pose information and report only averaged parameters. This study proposed a straightforward method for stride-by-stride estimation of spatiotemporal gait parameters. A total of 160 elderly individuals participated. Data were gathered simultaneously with a GAITRite system and a mobile camera. Three deep learning networks were trained with a few RGB frames as input and a continuous 1D signal containing both spatial and temporal gait parameters as output. The trained networks estimated stride lengths with correlations of 0.938 or higher and detected gait events with F1-scores of 0.914 or higher. Clinical relevance - The proposed method showed excellent agreement with the GAITRite system for spatiotemporal gait parameters. Our approach can be applied to monitor the health of the elderly based on their gait parameters, supporting early diagnosis, proper treatment, and timely intervention.
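
The abstract describes networks that map a few RGB frames to a continuous 1D signal encoding spatial and temporal gait parameters, without further architectural detail. The following PyTorch sketch shows one plausible shape for such a model (a per-frame CNN encoder followed by a temporal convolution producing one value per frame); every layer size and the 8-frame input are assumptions for illustration.

    # Minimal sketch: a short stack of RGB frames in, a 1D signal (one value per
    # frame) out. The architecture is an illustrative guess, not the paper's model.
    import torch
    import torch.nn as nn

    class FramesToGaitSignal(nn.Module):
        def __init__(self, feat=64):
            super().__init__()
            self.encoder = nn.Sequential(              # per-frame RGB encoder
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat),
            )
            self.temporal = nn.Sequential(             # smooth over time, 1 value/frame
                nn.Conv1d(feat, 32, 3, padding=1), nn.ReLU(),
                nn.Conv1d(32, 1, 3, padding=1),
            )

        def forward(self, frames):                     # frames: (batch, T, 3, H, W)
            b, t = frames.shape[:2]
            f = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            return self.temporal(f.transpose(1, 2)).squeeze(1)   # (batch, T) signal

    signal = FramesToGaitSignal()(torch.randn(2, 8, 3, 112, 112))
    print(signal.shape)  # torch.Size([2, 8])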


Subjects
Gait Analysis, Gait, Humans, Aged
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 177-181, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086538

ABSTRACT

Joint angular velocity during daily-life exercises is an important clinical outcome for assessing injury risk, monitoring rehabilitation progress, and evaluating athletic performance. Recently, wearable sensors have been widely used to monitor lower limb kinematics, but they are difficult and inconvenient to use in daily life. To mitigate these limitations, this study proposes a vision-based system for estimating lower limb joint kinematics using a deep convolutional neural network combined with bidirectional long short-term memory (BiLSTM) and gated recurrent unit (GRU) networks. The normalized correlation coefficient and the mean absolute error were computed between the ground truth obtained from an optical motion capture system and the joint angular velocities estimated by the proposed models. The highest correlations were 0.93 for the squat and 0.92 for treadmill walking. Furthermore, independent models for the angular velocity of each joint (hip, knee, and ankle) were analyzed and compared; among the three, the knee joint was estimated most accurately (0.96 for both the squat and treadmill walking). The proposed models showed high estimation accuracy from both the lateral and frontal views, regardless of camera position and angle. This study demonstrates the applicability of a sensor-free, vision-based system for monitoring lower limb kinematics during home workouts for healthcare and rehabilitation.
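
The reported agreement measures are a normalized correlation coefficient and a mean absolute error between estimated and motion-capture angular velocities. A minimal sketch of how these two measures are commonly computed is given below; the paper's exact definitions may differ.

    # Commonly used agreement measures for angular-velocity time series.
    # The example series are synthetic; values are in arbitrary rad/s.
    import numpy as np

    def ncc(estimated, reference):
        """Normalized (Pearson-style) correlation between two series."""
        e = estimated - estimated.mean()
        r = reference - reference.mean()
        return float(np.dot(e, r) / (np.linalg.norm(e) * np.linalg.norm(r)))

    def mae(estimated, reference):
        """Mean absolute error between the two series (same units as input)."""
        return float(np.mean(np.abs(estimated - reference)))

    t = np.linspace(0, 2 * np.pi, 200)
    reference = np.sin(t)                                   # e.g. knee angular velocity
    estimated = np.sin(t) + 0.05 * np.random.randn(t.size)  # noisy estimate
    print(ncc(estimated, reference), mae(estimated, reference))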


Assuntos
Aprendizado Profundo , Fenômenos Biomecânicos , Humanos , Articulação do Joelho , Extremidade Inferior , Caminhada
5.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 2703-2707, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085943

ABSTRACT

Vision-based human joint angle estimation is essential for remote and continuous health monitoring. Most vision-based angle estimation methods use joint locations extracted from optical motion cameras, depth cameras, or human pose estimation models. This study proposed a reliable and straightforward deep learning approach for estimating knee and elbow flexion/extension angles from RGB video. Fifteen healthy participants performed four daily activities. Experiments were conducted with four different deep learning networks; each network took nine consecutive frames as input, and the output was the per-frame knee and elbow joint angles extracted from an optical motion capture system. The BiLSTM-based estimator estimated both joint angles with correlations of 0.955 for the knee and 0.917 for the elbow, regardless of the camera view angle.
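
The ground-truth knee and elbow flexion/extension angles come from an optical motion capture system. As background, here is a minimal sketch of the standard way such a joint angle is computed from three 3D marker positions (e.g. hip-knee-ankle for the knee); the marker coordinates below are illustrative only.

    # Joint angle from three 3D marker positions: the angle at the middle marker
    # between the two limb-segment vectors. Coordinates here are made up.
    import numpy as np

    def flexion_angle(proximal, joint, distal):
        """Angle (degrees) at `joint` between the two limb segments."""
        u = np.asarray(proximal) - np.asarray(joint)
        v = np.asarray(distal) - np.asarray(joint)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

    hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.5, 0.0], [0.1, 0.0, 0.0]
    print(flexion_angle(hip, knee, ankle))   # ~169 degrees (nearly extended knee)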


Subjects
Deep Learning, Elbow Joint, Elbow, Humans, Knee Joint
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 1936-1941, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891666

ABSTRACT

Accurate gait event detection from video is a challenging problem, and most vision-based methods rely heavily on gait features estimated from gait silhouettes and human pose information. This paper presents an accurate multi-view approach using deep convolutional neural networks for efficient and practical gait event detection that requires no additional gait feature engineering. In particular, we aimed to detect gait events from frontal as well as lateral views. We conducted experiments with four different deep CNN models on our own dataset, which includes three different walking actions from 11 healthy participants. The models took nine consecutive frames stacked together as input and output per-frame probability vectors for the gait events toe-off and heel-strike. The deep CNN models, trained only on video frames, detected gait events with 93% or higher accuracy for both straight walking and walking around, from both frontal and lateral views.
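
The models output per-frame probability vectors for toe-off and heel-strike. A minimal sketch of one way to turn such probabilities into detected event frames and score them against reference annotations within a small tolerance window is shown below; the threshold and tolerance values are assumptions for illustration.

    # Turn per-frame event probabilities into detected event frames (local maxima
    # above a threshold) and score them within a tolerance window. The threshold,
    # tolerance, and example values are illustrative assumptions.
    import numpy as np

    def detect_events(prob, threshold=0.5):
        """Frames that are local probability maxima above the threshold."""
        peaks = []
        for i in range(1, len(prob) - 1):
            if prob[i] >= threshold and prob[i] >= prob[i - 1] and prob[i] > prob[i + 1]:
                peaks.append(i)
        return peaks

    def detection_accuracy(detected, reference, tolerance=2):
        """Fraction of reference events matched within +/- tolerance frames."""
        hits = sum(any(abs(d - r) <= tolerance for d in detected) for r in reference)
        return hits / len(reference) if reference else 0.0

    prob = np.array([0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.8, 0.4, 0.1])  # heel-strike probs
    print(detect_events(prob))                                       # [2, 6]
    print(detection_accuracy(detect_events(prob), reference=[2, 7])) # 1.0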


Subjects
Gait, Walking, Heel, Humans, Neural Networks (Computer), Probability