Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-35239484

ABSTRACT

Most stroke survivors have difficulties completing activities of daily living (ADLs) independently. However, few rehabilitation systems have focused on ADLs-related training for gross and fine motor function together. We propose an ADLs-based serious game rehabilitation system for the training of motor function and coordination of both arm and hand movement, in which the user performs the corresponding ADLs movements to interact with the target in the serious game. A multi-sensor fusion model based on electromyographic (EMG), force myographic (FMG), and inertial sensing was developed to estimate users' natural upper limb movement. Eight healthy subjects and three stroke patients were recruited in an experiment to validate the system's effectiveness. The performance of different sensor and classifier configurations on hand gesture classification against arm position variations was analyzed, and qualitative patient questionnaires were conducted. Results showed that elbow extension/flexion has a more significant negative influence on EMG-based, FMG-based, and EMG+FMG-based hand gesture recognition than shoulder abduction/adduction does. In addition, there was no significant difference between the negative influence of shoulder abduction/adduction and that of shoulder flexion/extension on hand gesture recognition. However, there was a significant interaction between sensor configurations and algorithm configurations in both offline and real-time recognition accuracy. The EMG+FMG-combined multi-position classifier model had the best performance against arm position change. In addition, all the stroke patients reported their ADLs-related ability could be restored by using the system. These results demonstrate that the multi-sensor fusion model could estimate hand gestures and gross movement accurately, and the proposed training system has the potential to improve patients' ability to perform ADLs.
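The core fusion idea can be sketched in miniature: concatenate the EMG and FMG feature vectors and dispatch to a classifier matched to the current arm position (the "multi-position classifier model"). All names and the toy threshold classifiers below are illustrative assumptions, not the paper's implementation:

```python
def classify_gesture(emg_feats, fmg_feats, arm_position, position_models):
    """Concatenate EMG and FMG features, then dispatch to the classifier
    trained for the current arm position. 'position_models' maps a
    position label to a callable classifier (hypothetical interface)."""
    fused = list(emg_feats) + list(fmg_feats)
    return position_models[arm_position](fused)

# Stand-in threshold 'classifiers', one per arm position (hypothetical):
models = {
    "elbow_flexed": lambda f: "grasp" if sum(f) > 1.0 else "rest",
    "neutral":      lambda f: "grasp" if sum(f) > 2.0 else "rest",
}
gesture = classify_gesture([0.6, 0.3], [0.4], "elbow_flexed", models)
```

Training one model per arm position is one plausible reading of how the combined model stays robust to arm position change; the paper's actual classifiers and features are not reproduced here.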


Subject(s)
Stroke Rehabilitation, Stroke, Activities of Daily Living, Arm, Electromyography, Hand, Humans, Movement, Stroke Rehabilitation/methods, Upper Extremity
2.
J Neuroeng Rehabil ; 18(1): 37, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33596942

ABSTRACT

BACKGROUND: The foot progression angle is an important measure used to help patients reduce their knee adduction moment. Current measurement systems are either lab-bound or do not function in all environments (e.g., magnetically distorted ones). This work proposes a novel approach to estimate the foot progression angle using a single foot-worn inertial sensor (accelerometer and gyroscope). METHODS: The approach uses a dynamic step frame that is recalculated for the stance phase of each step; the foot trajectory is computed relative to that frame to minimize drift effects and to eliminate the need for a magnetometer. The foot progression angle (FPA) is then calculated as the angle between the walking direction and the dynamic step frame. This approach was validated by gait measurements with five subjects walking with three gait types (normal, toe-in, and toe-out). RESULTS: The FPA was estimated with a maximum mean error of ~2.6° over all gait conditions. Additionally, the proposed inertial approach can significantly differentiate between the three gait types. CONCLUSION: The proposed approach can effectively estimate differences in FPA without requiring a heading reference (magnetometer). This work enables feedback applications on FPA for patients with gait disorders that function in any environment, i.e., outside of a gait lab or in magnetically distorted environments.
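Once the per-step walking direction and the foot's long axis are expressed in the same horizontal frame, the FPA reduces to a signed angle between two 2-D vectors. A minimal sketch, assuming a toe-out-positive sign convention (the paper's exact frame construction and conventions are not reproduced here):

```python
import math

def foot_progression_angle(walk_dir, foot_axis):
    """Signed angle in degrees between the walking direction and the
    foot's long axis, both 2-D vectors in the horizontal plane.
    Positive = toe-out, negative = toe-in (assumed convention)."""
    ang = math.degrees(
        math.atan2(foot_axis[1], foot_axis[0])
        - math.atan2(walk_dir[1], walk_dir[0])
    )
    # wrap to [-180, 180)
    return (ang + 180.0) % 360.0 - 180.0

# Walking straight along x with the foot rotated 10 degrees outward:
fpa = foot_progression_angle(
    (1.0, 0.0),
    (math.cos(math.radians(10)), math.sin(math.radians(10))),
)
```

The novelty of the paper lies in obtaining both vectors from a single magnetometer-free sensor via the per-step dynamic frame; the angle computation itself is the simple part shown above.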


Subject(s)
Gait Analysis/instrumentation, Wearable Electronic Devices, Accelerometry/instrumentation, Adult, Biomechanical Phenomena, Foot/physiopathology, Humans, Male
3.
Sensors (Basel) ; 19(17)2019 Aug 27.
Article in English | MEDLINE | ID: mdl-31461958

ABSTRACT

Full-body motion capture typically requires sensors/markers to be placed on each rigid body segment, which results in long setup times and is obtrusive. The number of sensors/markers can be reduced using deep learning or offline methods. However, this requires large training datasets and/or sufficient computational resources. Therefore, we investigate the following research question: "What is the performance of a shallow approach, compared to a deep learning one, for estimating time coherent full-body poses using only five inertial sensors?". We propose to incorporate past/future inertial sensor information into a stacked input vector, which is fed to a shallow neural network for estimating full-body poses. Shallow and deep learning approaches are compared using the same input vector configurations. Additionally, the inclusion of acceleration input is evaluated. The results show that a shallow learning approach can estimate full-body poses with a similar accuracy (~6 cm) to that of a deep learning approach (~7 cm). However, the jerk errors are smaller using the deep learning approach, which can be the effect of explicit recurrent modelling. Furthermore, it is shown that the delay using a shallow learning approach (72 ms) is smaller than that of a deep learning approach (117 ms).
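The "stacked input vector" of past/future sensor information amounts to a sliding window: for each time step, the frames from t-past to t+future are concatenated into one feature vector before being fed to the (shallow or deep) network. A small sketch with repeated-edge padding, which is an assumption on my part rather than the paper's stated boundary handling:

```python
def stack_frames(frames, past, future):
    """Build one stacked input vector per time step by concatenating
    the sensor frames from t-past .. t+future. Sequence edges are
    padded by repeating the first/last frame (assumed strategy)."""
    n = len(frames)
    stacked = []
    for t in range(n):
        window = []
        for k in range(t - past, t + future + 1):
            window.extend(frames[min(max(k, 0), n - 1)])
        stacked.append(window)
    return stacked

# Three 2-D sensor frames, one frame of past and one of future context:
out = stack_frames([[0, 0], [1, 1], [2, 2]], past=1, future=1)
```

Including future frames is what introduces the reported output delay (72 ms shallow vs. 117 ms deep): the estimator must wait for those frames to arrive before it can emit a pose.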


Subject(s)
Biosensing Techniques, Gait/physiology, Physiological Monitoring/methods, Movement/physiology, Acceleration, Algorithms, Human Body, Humans, Machine Learning, Neural Networks, Computer, Posture/physiology
4.
Front Physiol ; 9: 218, 2018.
Article in English | MEDLINE | ID: mdl-29623042

ABSTRACT

Analysis of running mechanics has traditionally been limited to a gait laboratory using either force plates or an instrumented treadmill in combination with a full-body optical motion capture system. With the introduction of inertial motion capture systems, it becomes possible to measure kinematics in any environment. However, kinetic information could not be provided with such technology. Furthermore, numerous body-worn sensors are required for a full-body motion analysis. The aim of this study is to examine the validity of a method to estimate sagittal knee joint angles and vertical ground reaction forces during running using an ambulatory minimal body-worn sensor setup. Two concatenated artificial neural networks were trained (using data from eight healthy subjects) to estimate the kinematics and kinetics of the runners. The first artificial neural network maps the information (orientation and acceleration) of three inertial sensors (placed at the lower legs and pelvis) to lower-body joint angles. The estimated joint angles in combination with measured vertical accelerations are input to a second artificial neural network that estimates vertical ground reaction forces. To validate our approach, estimated joint angles were compared to both inertial and optical references, while kinetic output was compared to measured vertical ground reaction forces from an instrumented treadmill. Performance was evaluated using two scenarios: training and evaluating on a single subject and training on multiple subjects and evaluating on a different subject. The estimated kinematics and kinetics of most subjects show excellent agreement (ρ>0.99) with the reference, for single subject training. Knee flexion/extension angles are estimated with a mean RMSE <5°. Ground reaction forces are estimated with a mean RMSE < 0.27 BW. 
Additionally, peak vertical ground reaction force, loading rate, and maximal knee flexion during stance were compared; no significant differences were found. With multiple-subject training the accuracy of estimating discrete and continuous outcomes decreases; however, good agreement (ρ > 0.9) is still achieved for seven of the eight evaluated subjects. The performance of multiple-subject learning depends on the diversity of the training dataset, as differences in accuracy were found between the evaluated subjects.
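The concatenated-network structure described above is a simple cascade: the first model maps inertial features to joint angles, and the second maps those angles plus the measured vertical accelerations to vertical ground reaction force. A rough sketch of the data flow with stand-in linear "models" (the paper's actual networks are not reproduced here):

```python
def cascade(kinematics_model, kinetics_model, imu_features, vertical_acc):
    """Two-stage pipeline mirroring the concatenated-ANN idea:
    stage 1 estimates joint angles from inertial features; stage 2
    estimates vertical GRF from the angles plus vertical acceleration."""
    angles = kinematics_model(imu_features)
    vgrf = kinetics_model(angles + vertical_acc)
    return angles, vgrf

# Toy stand-ins for illustration only (not trained networks):
angles, vgrf = cascade(
    lambda x: [2 * v for v in x],  # hypothetical kinematics stage
    lambda x: sum(x),              # hypothetical kinetics stage
    [1.0, 2.0],                    # orientation/acceleration features
    [9.8],                         # measured vertical acceleration
)
```

Chaining the stages means errors in the estimated angles propagate into the GRF estimate, which is one plausible reason the authors validate each stage against its own reference.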

5.
Sensors (Basel) ; 16(12)2016 Dec 15.
Article in English | MEDLINE | ID: mdl-27983676

ABSTRACT

Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups by using data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances.
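The lazy-learning variant is conceptually a lookup: store training pairs of (sparse orientation features, full-body pose) and, at run time, return the pose whose stored features are closest to the query. A minimal brute-force sketch using Euclidean distance, which is an assumed metric; a practical system would use a spatial index rather than a linear scan:

```python
def nearest_pose(query, features, poses):
    """Lazy learning: return the stored full-body pose whose sparse
    orientation feature vector is closest (squared Euclidean distance)
    to the query feature vector."""
    best = min(
        range(len(features)),
        key=lambda i: sum((q - f) ** 2 for q, f in zip(query, features[i])),
    )
    return poses[best]

# Two stored examples; the query is nearer the second feature vector:
pose = nearest_pose([0.9, 0.1], [[0.0, 1.0], [1.0, 0.0]], ["pose_A", "pose_B"])
```

Because the output is always a stored pose rather than an interpolation, nearest neighbor search degrades gracefully under magnetic disturbance of the query features, which is consistent with the robustness result reported above.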


Subject(s)
Human Body, Machine Learning, Physiological Monitoring/instrumentation, Posture/physiology, Adult, Algorithms, Biomechanical Phenomena, Female, Humans, Joints/physiology, Male, Neural Networks, Computer, Orientation, Time Factors