Results 1 - 10 of 10
1.
Sensors (Basel) ; 23(2)2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36679675

ABSTRACT

The Azure Kinect DK is an RGB-D camera popular in research and in studies with humans. For good scientific practice, it is relevant that the Azure Kinect yields consistent and reproducible results. We noticed that the results it yielded were inconsistent. Therefore, we examined 100 body tracking runs per processing mode provided by the Azure Kinect Body Tracking SDK on two different computers using a prerecorded video. We compared those runs with respect to spatiotemporal progression (spatial distribution of joint positions per processing mode and run), derived parameters (bone length), and differences between the computers. We found a previously undocumented converging behavior of joint positions at the start of the body tracking. Euclidean distances of joint positions varied by clinically relevant amounts of up to 87 mm between runs for CUDA and TensorRT; CPU and DirectML showed no between-run differences on the same computer. Additionally, we found noticeable differences between the two computers. Therefore, we recommend choosing the processing mode carefully, reporting it, and performing all analyses on the same computer to ensure reproducible results when using the Azure Kinect and its body tracking in research. Consequently, results from previous studies with the Azure Kinect should be reevaluated, and until then, their findings should be interpreted with caution.
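
As an illustrative sketch (not the authors' published code), between-run variability of the kind described above could be quantified by computing per-joint Euclidean distances between two body tracking runs of the same video; the array shape and units below are assumptions.

```python
# Illustrative sketch (not the authors' code): per-joint Euclidean distance
# between two body-tracking runs of the same prerecorded video.
# Assumes each run is a NumPy array of shape (n_frames, n_joints, 3) in millimetres.
import numpy as np

def run_to_run_distance(run_a: np.ndarray, run_b: np.ndarray) -> np.ndarray:
    """Return per-frame, per-joint Euclidean distances between two runs."""
    return np.linalg.norm(run_a - run_b, axis=-1)   # shape: (n_frames, n_joints)

# Example with synthetic data standing in for SDK output.
rng = np.random.default_rng(0)
run_a = rng.normal(size=(300, 32, 3)) * 10          # 300 frames, 32 joints
run_b = run_a + rng.normal(scale=2.0, size=run_a.shape)

d = run_to_run_distance(run_a, run_b)
print("max distance (mm):", d.max())
print("mean distance per joint (mm):", d.mean(axis=0).round(2))
```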


Subject(s)
Computers; Humans; Biomechanical Phenomena; Reproducibility of Results
2.
Sensors (Basel) ; 23(14)2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37514573

ABSTRACT

A "long short-term memory" (LSTM)-based human activity classifier is presented for skeleton data estimated in video frames. A strong feature engineering step precedes the deep neural network processing. The video was analyzed in short-time chunks created by a sliding window. A fixed number of video frames was selected for every chunk and human skeletons were estimated using dedicated software, such as OpenPose or HRNet. The skeleton data for a given window were collected, analyzed, and eventually corrected. A knowledge-aware feature extraction from the corrected skeletons was performed. A deep network model was trained and applied for two-person interaction classification. Three network architectures were developed-single-, double- and triple-channel LSTM networks-and were experimentally evaluated on the interaction subset of the "NTU RGB+D" data set. The most efficient model achieved an interaction classification accuracy of 96%. This performance was compared with the best reported solutions for this set, based on "adaptive graph convolutional networks" (AGCN) and "3D convolutional networks" (e.g., OpenConv3D). The sliding-window strategy was cross-validated on the "UT-Interaction" data set, containing long video clips with many changing interactions. We concluded that a two-step approach to skeleton-based human activity classification (a skeleton feature engineering step followed by a deep neural network model) represents a practical tradeoff between accuracy and computational complexity, due to an early correction of imperfect skeleton data and a knowledge-aware extraction of relational features from the skeletons.


Subject(s)
Neural Networks, Computer; Software; Humans; Skeleton
3.
Sensors (Basel) ; 22(9)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35590844

ABSTRACT

Skeleton data, which is often used in the HCI field, is a data structure that can efficiently express human poses and gestures because it consists of the 3D positions of joints. The advancement of RGB-D sensors, such as Kinect sensors, has enabled the easy capture of skeleton data from depth or RGB images. However, when tracking a target with a single sensor, occlusion causes the quality of invisible joints to be randomly degraded. As a result, multiple sensors should be used to reliably track a target in all directions over a wide range. In this paper, we propose a new method for combining multiple inaccurate skeleton data sets, obtained from multiple sensors that capture a target from different angles, into a single accurate skeleton. The proposed algorithm uses density-based spatial clustering of applications with noise (DBSCAN) to prevent noisy, inaccurate joint candidates from participating in the merging process. After merging the inlier candidates, we used a Kalman filter to smooth the jitter in each joint's movement. We evaluated the proposed algorithm's performance using the best view as the ground truth. In addition, the results for different sizes of the DBSCAN search area were analyzed. By applying the proposed algorithm, the joint position accuracy of the merged skeleton improved as the number of sensors increased. Furthermore, the highest performance was achieved when the DBSCAN search area was 10 cm.
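
A minimal sketch of the merging idea, under stated assumptions (joint candidates from several sensors expressed as 3D points in a shared coordinate frame, eps chosen to mirror the 10 cm search area), might cluster candidates with DBSCAN, drop noise points, and average the inliers:

```python
# Sketch of the merging idea (assumptions, not the paper's implementation):
# cluster per-joint candidates from multiple sensors with DBSCAN, drop noise
# points, and average the inliers. eps=0.10 m mirrors the 10 cm search area.
import numpy as np
from sklearn.cluster import DBSCAN

def merge_joint_candidates(candidates: np.ndarray, eps: float = 0.10) -> np.ndarray:
    """candidates: (n_sensors, 3) positions of one joint in a shared frame (metres)."""
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(candidates)
    inliers = candidates[labels != -1]           # -1 marks DBSCAN noise
    if len(inliers) == 0:                        # fall back if everything is noise
        inliers = candidates
    merged = inliers.mean(axis=0)
    # In the paper, the merged trajectory is additionally smoothed with a
    # Kalman filter to reduce frame-to-frame jitter (not shown here).
    return merged

cands = np.array([[0.50, 1.00, 2.00],
                  [0.52, 1.01, 2.02],
                  [0.49, 0.99, 1.98],
                  [0.90, 1.40, 2.50]])            # outlier from an occluded view
print(merge_joint_candidates(cands))
```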


Subject(s)
Algorithms; Musculoskeletal System; Humans; Movement; Skeleton
4.
Sensors (Basel) ; 21(5)2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33804411

ABSTRACT

Joint estimation of the human body is useful in many fields such as human-computer interaction, autonomous driving, video analysis, and virtual reality. Although many depth-based studies have been classified and summarized in previous review or survey papers, point cloud-based pose estimation of the human body remains difficult due to the unordered nature and rotation invariance of point clouds. In this review, we summarize recent developments in point cloud-based pose estimation of the human body. The existing works are divided into three categories based on their working principles: template-based methods, feature-based methods, and machine learning-based methods. In particular, the significant works are highlighted with a detailed introduction to analyze their characteristics and limitations. The widely used datasets in the field are summarized, and quantitative comparisons are provided for the representative methods. Moreover, this review helps readers further understand the pertinent applications in many frontier research directions. Finally, we conclude with the challenges involved and the problems to be solved in future research.


Subject(s)
Cloud Computing; Machine Learning; Computers; Humans
5.
Sensors (Basel) ; 17(11)2017 Nov 22.
Article in English | MEDLINE | ID: mdl-29165396

ABSTRACT

Climbing and descending stairs are demanding daily activities, and monitoring them may reveal the presence of musculoskeletal diseases at an early stage. A markerless system is needed to monitor such stair-walking activity without mentally or physically disturbing the subject. The Microsoft Kinect v2 has been used for gait monitoring, as it provides a markerless skeleton tracking function. However, few studies have used this device for stair-walking monitoring, and the accuracy of its skeleton tracking function during stair walking has not been evaluated. Moreover, skeleton tracking is not likely to be suitable for estimating body joints during stair walking, as the posture of the body differs from that during level walking. In this study, a new method of estimating the 3D position of the knee joint was devised that uses the depth data of the Kinect v2. The accuracy of this method was compared with that of the skeleton tracking function of the Kinect v2 by simultaneously measuring subjects with a 3D motion capture system. The depth data method was found to be more accurate than skeleton tracking: the mean 3D Euclidean distance error of the depth data method was 43.2 ± 27.5 mm, while that of skeleton tracking was 50.4 ± 23.9 mm. This method indicates the possibility of stair-walking monitoring for the early discovery of musculoskeletal diseases.
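
A minimal sketch of the basic operation behind estimating a joint's 3D position from depth data is the pinhole back-projection of a depth pixel to camera coordinates; the intrinsics below are placeholder values, and the study's actual knee-localization procedure is more involved:

```python
# Sketch of depth-to-3D back-projection with the pinhole model (placeholder
# intrinsics; the paper's actual knee-localisation procedure is more involved).
import numpy as np

FX, FY = 365.0, 365.0      # assumed focal lengths in pixels
CX, CY = 256.0, 212.0      # assumed principal point (Kinect v2 depth is 512x424)

def depth_pixel_to_point(u: float, v: float, depth_mm: float) -> np.ndarray:
    """Convert a depth-image pixel (u, v) with depth in mm to camera-space XYZ (mm)."""
    z = depth_mm
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

knee_xyz = depth_pixel_to_point(300, 280, 1850.0)
print(knee_xyz)
```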


Subject(s)
Knee Joint; Biomechanical Phenomena; Gait; Humans; Range of Motion, Articular; Walking
6.
Front Robot AI ; 10: 1108114, 2023.
Article in English | MEDLINE | ID: mdl-36936408

ABSTRACT

Introduction: Video-based clinical rating plays an important role in assessing dystonia and monitoring the effect of treatment in dyskinetic cerebral palsy (CP). However, evaluation by clinicians is time-consuming, and the quality of rating depends on experience. The aim of the current study is to provide a proof of concept for a machine learning approach that automatically scores dystonia using 2D stick figures extracted from videos. Model performance was compared to human performance. Methods: A total of 187 video sequences of 34 individuals with dyskinetic CP (8-23 years, all non-ambulatory) were filmed at rest during lying and supported sitting. Videos were scored by three raters according to the Dyskinesia Impairment Scale (DIS) for arm and leg dystonia (normalized scores ranging from 0 to 1). Pixel coordinates of the left and right wrist, elbow, shoulder, hip, knee, and ankle were extracted using DeepLabCut, an open-source toolbox built on a pose estimation algorithm. Within a subset, tracking accuracy was assessed for a pretrained human model and for models trained with an increasing number of manually labeled frames. The mean absolute error (MAE) between DeepLabCut's predicted body-point positions and the manual labels was calculated. Subsequently, movement and position features were calculated from the extracted body-point coordinates. These features were fed into a Random Forest Regressor to train a model to predict the clinical scores. The performance of the model trained with data from one rater, evaluated by the MAE between model and rater, was compared with inter-rater accuracy. Results: A tracking accuracy of 4.5 pixels (approximately 1.5 cm) could be achieved by adding 15-20 manually labeled frames per video. The MAEs for the trained models ranged from 0.21 ± 0.15 for arm dystonia to 0.14 ± 0.10 for leg dystonia (normalized DIS scores). The inter-rater MAEs were 0.21 ± 0.22 and 0.16 ± 0.20, respectively. Conclusion: This proof-of-concept study shows the potential of using stick figures extracted from common videos in a machine learning approach to automatically assess dystonia. Sufficient tracking accuracy can be reached by manually labeling 15-20 frames per video. With a relatively small data set, it is possible to train a model that automatically assesses dystonia with performance comparable to human scoring.
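
A hedged sketch of the regression step (with a made-up feature matrix standing in for the study's movement and position features) could use scikit-learn's RandomForestRegressor and mean absolute error as follows:

```python
# Sketch of the regression step (assumed feature matrix, not the study's data):
# movement/position features per video sequence -> Random Forest -> DIS score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((187, 40))                 # 187 sequences, 40 assumed features
y = rng.random(187)                       # normalized DIS scores in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```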

7.
Article in English | MEDLINE | ID: mdl-32478049

ABSTRACT

Continuous monitoring of frail individuals to detect dangerous situations during their daily living at home can be a powerful tool for their inclusion in society, by allowing them to live independently yet safely. To this end, we developed a pose recognition system tailored to disabled students living in college dorms, based on skeleton tracking through four Kinect One devices that independently record the inhabitant from different viewpoints while preserving the individual's privacy. The system is intended to classify each data frame and provide the classification result to a further decision-making algorithm, which may trigger an alarm based on the classified pose and the location of the subject with respect to the furniture in the room. An extensive dataset was recorded on 12 individuals moving in a mock-up room and assuming four poses to be recognized: standing, sitting, lying down, and "dangerous sitting." The latter consists of the subject slumped in a chair with his or her head lying forward or backward as if unconscious. Each skeleton frame was labeled and represented using 10 discriminative features: three skeletal joint vertical coordinates and seven relative and absolute angles describing articular joint positions and body segment orientations. To classify the pose of the subject in each skeleton frame, we built a multi-layer perceptron neural network with two hidden layers and a softmax output layer, which we trained on the data from 10 of the 12 subjects (495,728 frames), with the data from the two remaining subjects forming the test set (106,802 frames). The system achieved very promising results, with an average accuracy of 83.9% (ranging from 82.7% to 94.3% across the four classes). Our work proves the usefulness of human pose recognition based on machine learning in the field of safety monitoring in assisted living conditions.
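
A minimal sketch of a two-hidden-layer perceptron with a softmax output over 10 skeleton features is shown below; the layer sizes and synthetic data are assumptions, not the study's configuration:

```python
# Sketch of a two-hidden-layer MLP pose classifier over 10 skeleton features
# (layer sizes and data are assumptions, not the study's configuration).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

POSES = ["standing", "sitting", "lying", "dangerous_sitting"]

rng = np.random.default_rng(1)
X_train = rng.random((5000, 10))                  # 10 features per skeleton frame
y_train = rng.integers(0, len(POSES), 5000)
X_test = rng.random((1000, 10))
y_test = rng.integers(0, len(POSES), 1000)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0)
clf.fit(X_train, y_train)                         # multiclass output uses softmax
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```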

8.
Physiotherapy ; 101(4): 389-93, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26050135

ABSTRACT

OBJECTIVE: To test the reliability and validity of shoulder joint angle measurements from the Microsoft Kinect™ for virtual rehabilitation. DESIGN: Test-retest reliability and concurrent validity, feasibility study. SETTING: Motion analysis laboratory. PARTICIPANTS: A convenience sample of 10 healthy adults. METHODS: Shoulder joint angle was assessed in four static poses, with two trials for each pose, using: (1) the Kinect; (2) a three-dimensional motion analysis system; and (3) a clinical goniometer. All poses were captured with the Kinect from the frontal view. The two shoulder flexion poses were also captured with the Kinect from the sagittal view. MAIN OUTCOME MEASURES: Absolute and relative test-retest reliability of the Kinect for the measurement of shoulder angle was determined for each pose with intraclass correlation coefficients (ICCs), the standard error of measurement, and the minimal detectable change. The 95% limits of agreement (LOA) between the Kinect and the standard methods for measuring shoulder angle were computed to determine concurrent validity. RESULTS: While the Kinect proved to be highly reliable (ICC 0.76-0.98) for measuring shoulder angle from the frontal view, the 95% LOA between the Kinect and the two measurement standards were greater than ±5° in all poses for both views. CONCLUSIONS: Before the Kinect is used to measure movements for virtual rehabilitation applications, it is imperative to understand its limitations in precision and accuracy for the measurement of specific joint motions.
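
As an illustrative sketch (with made-up paired measurements), the 95% limits of agreement between the Kinect and a reference method can be computed Bland-Altman style as bias ± 1.96 × SD of the differences:

```python
# Sketch of a Bland-Altman 95% limits-of-agreement computation between paired
# shoulder-angle measurements (made-up values, not the study's data).
import numpy as np

kinect = np.array([88.0, 92.5, 45.1, 135.2, 90.3, 44.0])      # degrees
reference = np.array([90.0, 90.0, 45.0, 130.0, 90.0, 45.0])   # mocap / goniometer

diff = kinect - reference
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} deg, 95% LOA = [{loa_low:.1f}, {loa_high:.1f}] deg")
```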


Subject(s)
Physical Therapy Modalities/standards; Shoulder Joint/anatomy & histology; Biomechanical Phenomena; Feasibility Studies; Female; Humans; Male; Range of Motion, Articular; Reproducibility of Results; User-Computer Interface; Young Adult
9.
Disabil Rehabil Assist Technol ; 9(4): 344-52, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23786360

ABSTRACT

UNLABELLED: Games and their use in rehabilitation have formed a new and rapidly growing area of research. A critical hardware component of rehabilitation programs is the input device that measures the patients' movements. Since Microsoft released the Kinect, extensive research has been initiated on its applications as an input device for rehabilitation. However, since most of the work in this area relies on a qualitative determination of joint movements rather than an accurate quantitative one, detailed analysis of patients' movements is hindered. The aim of this article is to determine the accuracy of the Kinect's joint tracking. To fulfill this task, a model of the upper body was fabricated. The displacements of the joint centers were estimated by the Kinect at different positions and were then compared with the actual, measured displacements. Moreover, the dependency of the Kinect's error on distance and joint type was measured and analyzed. IMPLICATIONS FOR REHABILITATION: This study measures and reports the accuracy of a sensor that can be directly used for monitoring physical therapy exercises. Using this sensor facilitates remote rehabilitation.
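
A hedged sketch of how error could be tabulated by distance and joint type (assumed column names and made-up numbers, not the article's measurements):

```python
# Sketch of analysing error by distance and joint type (assumed column names
# and made-up numbers; the article reports its own measurements).
import pandas as pd

df = pd.DataFrame({
    "joint":      ["shoulder", "shoulder", "elbow", "elbow", "wrist", "wrist"],
    "distance_m": [1.5, 2.5, 1.5, 2.5, 1.5, 2.5],
    "error_mm":   [12.0, 18.5, 15.2, 22.1, 20.3, 31.7],
})

print(df.groupby(["joint", "distance_m"])["error_mm"].mean().unstack())
```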


Subject(s)
Joints/physiology; Motor Activity/physiology; Physical Therapy Modalities/instrumentation; Range of Motion, Articular/physiology; Upper Extremity/physiology; Video Games; Humans; Models, Anatomic; Reproducibility of Results
10.
Article in Chinese | WPRIM | ID: wpr-469148

ABSTRACT

Objective: To improve the accuracy and efficiency of measuring the range of motion (ROM) of the human lower limbs and to simplify the measurement process, an automatic measurement of lower-limb ROM based on the Kinect was proposed and tested in this study. Methods: Fifty examinees were randomly divided into 5 groups (a, b, c, d, and e) of 10 members each. Using the Kinect's human skeleton tracking technology, the positions of each examinee's lower limbs were captured and tracked by processing the depth data of the key joints of the lower limbs. The ROM of the hip and knee was then output on a human-computer interaction interface in real time. The accuracy of the automatic measuring method was verified by comparison with traditional manual measurement results. Meanwhile, with the aid of speech recognition and speech output technology, the delivery of warning information and the way subjects were switched were optimized. Results: According to the Grubbs test and t-test, the |t| values of the ROM measurements for hip abduction (t = 0.57, P = 0.597), hip adduction (t = 0.52, P = 0.621), hip anteflexion (t = 1.01, P = 0.371), hip postextension (t = 0.12, P = 0.902), hip external rotation (t = 0.00, P = 1.000), hip internal rotation (t = 0.34, P = 0.753), and knee flexion and extension (t = 1.12, P = 0.280) were all below the threshold value t0.025(4) = 2.776 at a significance level of α = 0.05, indicating no significant difference between the measured results and the expected values (P > 0.05). Conclusion: Automatic measurement of lower-limb ROM can be realized; it improves measurement accuracy, simplifies the measurement process, and enhances the practicability of lower-limb ROM measurement.
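
As a minimal sketch of the underlying operations (made-up coordinates and reference value, not the study's data), a joint angle can be computed from three tracked 3D joint positions and compared against a manual reference with a one-sample t-test:

```python
# Sketch: knee flexion angle from three tracked joints (hip, knee, ankle) and a
# t-test of Kinect-derived angles against a manual reference mean (made-up data).
import numpy as np
from scipy import stats

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

hip = np.array([0.0, 1.0, 2.0])
knee = np.array([0.0, 0.5, 2.1])
ankle = np.array([0.1, 0.1, 1.9])
print("knee angle:", round(joint_angle(hip, knee, ankle), 1), "deg")

kinect_angles = np.array([128.4, 131.0, 129.7, 130.5, 127.9])   # repeated Kinect measurements
manual_reference = 130.0                                         # goniometer mean (assumed)
t, p = stats.ttest_1samp(kinect_angles, manual_reference)
print(f"t = {t:.2f}, p = {p:.3f}")
```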
