Results 1 - 4 of 4
1.
J Sports Sci Med; 23(1): 515-525, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228769

ABSTRACT

OpenPose-based motion analysis (OpenPose-MA), utilizing deep learning methods, has emerged as a compelling technique for estimating human motion. It addresses the drawbacks associated with conventional three-dimensional motion analysis (3D-MA) and human visual detection-based motion analysis (Human-MA), including costly equipment, time-consuming analysis, and restricted experimental settings. This study aimed to assess the precision of OpenPose-MA in comparison to Human-MA, using 3D-MA as the reference standard. The study involved a cohort of 21 young, healthy adults. OpenPose-MA employed the OpenPose algorithm, a deep learning-based open-source two-dimensional (2D) pose estimation method. Human-MA was conducted by a skilled physiotherapist. The knee valgus angle during a drop vertical jump (DVJ) task was computed by OpenPose-MA and Human-MA from the same frontal-plane video image, with 3D-MA serving as the reference standard. Several metrics were used to assess the reproducibility, accuracy, and similarity of the knee valgus angle between the methods: the intraclass correlation coefficient (ICC) (1, 3), mean absolute error (MAE), coefficient of multiple correlation (CMC) for waveform pattern similarity, and Pearson's correlation coefficients (OpenPose-MA vs. 3D-MA, Human-MA vs. 3D-MA). Unpaired t-tests were conducted to compare MAEs and CMCs between OpenPose-MA and Human-MA. The ICCs (1, 3) for OpenPose-MA, Human-MA, and 3D-MA demonstrated excellent reproducibility in the DVJ trial. No significant difference between OpenPose-MA and Human-MA was observed in the MAEs (OpenPose: 2.4° [95% CI: 1.9-3.0°], Human: 3.2° [95% CI: 2.1-4.4°]) or CMCs (OpenPose: 0.83 [range: 0.53-0.99], Human: 0.87 [range: 0.24-0.98]) of the knee valgus angles. The Pearson's correlation coefficients of OpenPose-MA and Human-MA relative to 3D-MA were 0.97 and 0.98, respectively. This study demonstrated that OpenPose-MA achieved satisfactory reproducibility and accuracy, and exhibited waveform similarity to 3D-MA comparable to that of Human-MA. Both OpenPose-MA and Human-MA showed a strong correlation with 3D-MA in terms of knee valgus angle excursion.
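
For readers who want a sense of the computation involved, the sketch below derives a frontal-plane knee valgus angle from three 2D keypoints (hip, knee, ankle), such as those returned by OpenPose's BODY_25 model. It is a minimal illustration under assumed conventions; the abstract does not specify the authors' exact angle definition or sign convention.

    import numpy as np

    def knee_valgus_angle(hip, knee, ankle):
        """Frontal-plane knee angle (degrees) from 2D (x, y) keypoints.

        Inputs are pixel coordinates for one leg, e.g. from OpenPose
        BODY_25 output. Returns the deviation from a straight
        (180-degree) hip-knee-ankle alignment; the sign convention
        here is an illustrative assumption.
        """
        thigh = np.asarray(hip, float) - np.asarray(knee, float)    # knee -> hip
        shank = np.asarray(ankle, float) - np.asarray(knee, float)  # knee -> ankle
        cos_a = thigh @ shank / (np.linalg.norm(thigh) * np.linalg.norm(shank))
        return 180.0 - np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))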


Subjects
Deep Learning; Humans; Reproducibility of Results; Young Adult; Male; Female; Biomechanical Phenomena; Knee Joint/physiology; Video Recording; Adult; Time and Motion Studies; Algorithms; Exercise Test/methods; Plyometric Exercise; Range of Motion, Articular/physiology; Imaging, Three-Dimensional
2.
Phys Ther Res; 27(1): 35-41, 2024.
Article in English | MEDLINE | ID: mdl-38690532

ABSTRACT

OBJECTIVE: Assessment of the vertical ground reaction force (VGRF) during landing tasks is crucial for physical therapy in sports. The purpose of this study was to determine whether the VGRF during a single-leg landing can be estimated from a two-dimensional (2D) video image using pose estimation artificial intelligence (AI). METHODS: Eighteen healthy male participants (age: 23.0 ± 1.6 years) performed a single-leg landing task from a 30-cm height. The VGRF was measured using a force plate and estimated from center of mass (COM) position data obtained from a 2D video image with pose estimation AI (2D-AI) and from three-dimensional optical motion capture (3D-Mocap). The measured and estimated peak VGRFs were compared using paired t-tests and Pearson's correlation coefficients. The absolute errors of the peak VGRF were also compared between the two estimations. RESULTS: No significant difference in the peak VGRF was found between the force-plate measurement and either the 2D-AI or the 3D-Mocap estimate (force plate: 3.37 ± 0.42 body weight [BW], 2D-AI: 3.32 ± 0.42 BW, 3D-Mocap: 3.50 ± 0.42 BW). There was no significant difference in the absolute error of the peak VGRF between the 2D-AI and 3D-Mocap estimations (2D-AI: 0.20 ± 0.16 BW, 3D-Mocap: 0.13 ± 0.09 BW, P = 0.163). The measured peak VGRF was significantly correlated with the peak estimated by 2D-AI (R = 0.835, P < 0.001). CONCLUSION: These results indicate that peak VGRF estimation from 2D video images using pose estimation AI is useful for the clinical assessment of single-leg landing.
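
The abstract does not detail the estimation pipeline, but a standard way to obtain VGRF from a COM trajectory is to double-differentiate the vertical COM position and apply Newton's second law; the sketch below is a minimal illustration under that assumption, not the authors' confirmed method.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def vgrf_in_bw(com_y_m, fps):
        """Estimate VGRF in body weights from vertical COM position.

        F = m * (a_com + g)  =>  F / (m * g) = 1 + a_com / g,
        so body mass cancels. com_y_m: vertical COM position in
        meters (upward positive), sampled at fps frames per second.
        """
        dt = 1.0 / fps
        vel = np.gradient(com_y_m, dt)  # first derivative: velocity
        acc = np.gradient(vel, dt)      # second derivative: acceleration
        return 1.0 + acc / G

    # Peak VGRF during the landing, for comparison with a force plate
    # (120 fps is an example rate, not taken from the paper):
    # peak_bw = vgrf_in_bw(com_y, fps=120).max()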

3.
Sensors (Basel); 23(24), 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38139644

ABSTRACT

Accuracy validation of gait analysis using pose estimation with artificial intelligence (AI) remains inadequate, particularly with respect to objective assessment of absolute error and similarity of waveform patterns. This study aimed to establish objective measures of absolute error and waveform pattern similarity for gait analysis using pose estimation AI (OpenPose). Additionally, we investigated the feasibility of simultaneously measuring both lower limbs with a single camera placed on one side. We compared motion analysis data from pose estimation AI, derived from video footage, with synchronized data from a three-dimensional motion analysis device, using the mean absolute error (MAE) for absolute error and the coefficient of multiple correlation (CMC) for waveform pattern similarity. The MAE ranged from 2.3 to 3.1° on the camera side and from 3.1 to 4.1° on the opposite side, with slightly higher accuracy on the camera side. The CMC ranged from 0.936 to 0.994 on the camera side and from 0.890 to 0.988 on the opposite side, indicating "very good to excellent" waveform similarity. Gait analysis using a single camera was thus sufficiently accurate on both sides for clinical evaluation, with measurement accuracy slightly superior on the camera side.
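
Both agreement metrics are straightforward to compute. Below is a minimal sketch of the MAE and of one common (Kadaba-style) CMC formulation for time-normalized waveforms; the abstract does not state which CMC variant the authors used.

    import numpy as np

    def mae(a, b):
        """Mean absolute error between two joint-angle waveforms."""
        return np.mean(np.abs(np.asarray(a) - np.asarray(b)))

    def cmc(waveforms):
        """Coefficient of multiple correlation across G waveforms of
        F frames each (array of shape G x F). Values near 1 indicate
        very similar waveform shapes."""
        Y = np.asarray(waveforms, dtype=float)
        G, F = Y.shape
        frame_mean = Y.mean(axis=0)  # mean across waveforms, per frame
        within = ((Y - frame_mean) ** 2).sum() / (G * (F - 1))
        total = ((Y - Y.mean()) ** 2).sum() / (G * F - 1)
        return np.sqrt(1.0 - within / total)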


Subjects
Artificial Intelligence; Gait Analysis; Biomechanical Phenomena; Lower Extremity; Motion; Gait
4.
Comput Biol Med; 141: 105164, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34971980

ABSTRACT

AIM: The purpose of this study was to automatically extract myocardial regions from transaxial single-photon emission computed tomography (SPECT) images using deep learning, in order to reduce the effects of extracardiac activity, which has long been problematic in cardiac nuclear imaging. METHOD: Myocardial region extraction was performed using two deep neural network architectures, U-Net and U-Net++, with 694 myocardial SPECT images manually labeled with myocardial regions as the training data. In addition, a multi-slice input method was introduced during training to take the relationships with adjacent slices into account. Accuracy was assessed using Dice coefficients at both the slice and pixel levels, and the most effective number of input slices was determined. RESULTS: The Dice coefficient was 0.918 at the pixel level, with no false positives at the slice level, using U-Net++ with 9 input slices. CONCLUSION: The proposed system based on U-Net++ with multi-slice input provided highly accurate myocardial region extraction and reduced the effects of extracardiac activity in myocardial SPECT images.
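
As an illustration of the two quantitative ingredients named here, the sketch below computes a pixel-level Dice coefficient and builds a multi-slice input by stacking adjacent transaxial slices into the channel dimension. The clamping at volume edges is an assumption; the abstract does not describe the authors' exact padding or ordering scheme.

    import numpy as np

    def dice(pred, target, eps=1e-7):
        """Pixel-level Dice coefficient between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def multislice_input(volume, idx, n_slices=9):
        """Stack n_slices adjacent slices centered on idx into the
        channel dimension, clamping indices at the volume edges.
        volume: (num_slices, H, W) SPECT array; returns an
        (n_slices, H, W) input for the segmentation network."""
        half = n_slices // 2
        ids = np.clip(np.arange(idx - half, idx + half + 1), 0, len(volume) - 1)
        return volume[ids]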


Subjects
Deep Learning; Neural Networks, Computer; Perfusion; Tomography, Emission-Computed, Single-Photon