Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37256808

ABSTRACT

Human motion prediction is challenging due to complex spatiotemporal feature modeling. Among existing methods, graph convolution networks (GCNs) are widely used because of their strength in explicit connection modeling. Within a GCN, the graph adjacency matrix drives feature aggregation and is therefore the key to extracting predictive motion features. State-of-the-art methods decompose the spatiotemporal correlation into spatial correlations for each frame and temporal correlations for each joint. Directly parameterizing these correlations introduces redundant parameters to represent common relations shared by all frames and all joints. Moreover, the spatiotemporal graph adjacency matrix is the same for all motion samples and thus cannot reflect sample-wise correspondence variances. To overcome these two bottlenecks, we propose dynamic spatiotemporal decompose GC (DSTD-GC), which uses only 28.6% of the parameters of the state-of-the-art GC. The key to DSTD-GC is constrained dynamic correlation modeling, which explicitly parameterizes the common static constraints as a spatial/temporal vanilla adjacency matrix shared by all frames/joints, and dynamically extracts correspondence variances for each frame/joint with an adjustment modeling function. For each sample, the common constrained adjacency matrices are fixed to represent generic motion patterns, while the extracted variances complete the matrices with sample-specific adjustments. Meanwhile, we mathematically reformulate GCs on spatiotemporal graphs into a unified form and find that DSTD-GC relaxes certain constraints of other GCs, which contributes to better representation capability. Moreover, by combining DSTD-GC with prior knowledge such as body connectivity and temporal context, we propose a powerful spatiotemporal GCN called DSTD-GCN.
On the Human3.6M, Carnegie Mellon University (CMU) Mocap, and 3D Poses in the Wild (3DPW) datasets, DSTD-GCN outperforms state-of-the-art methods by 3.9%-8.7% in prediction accuracy with 55.0%-96.9% fewer parameters. Code is available at https://github.com/Jaakk0F/DSTD-GCN.
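The constrained dynamic correlation modeling described above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the function and variable names (`dynamic_graph_conv`, `static_adj`, `adjustment`) and the tanh-based adjustment function are assumptions chosen for the demo; only the overall idea — a shared static adjacency completed by a sample-specific adjustment before feature aggregation — comes from the abstract.

```python
import numpy as np

def dynamic_graph_conv(x, static_adj, w_q, w_k):
    """One spatial graph-convolution step with a sample-specific
    adjacency adjustment (illustrative sketch, not DSTD-GC itself).

    x          : (J, C) joint features for one frame
    static_adj : (J, J) shared vanilla adjacency (generic motion patterns)
    w_q, w_k   : (C, D) projections used to extract sample-wise variances
    """
    q = x @ w_q                      # (J, D) query-like projection
    k = x @ w_k                      # (J, D) key-like projection
    adjustment = np.tanh(q @ k.T)    # (J, J) sample-specific correction
    adj = static_adj + adjustment    # static constraint + dynamic variance
    return adj @ x                   # aggregate features over the graph

rng = np.random.default_rng(0)
J, C, D = 5, 8, 4                    # 5 joints, 8 channels, 4 projection dims
x = rng.standard_normal((J, C))
static_adj = np.eye(J)               # trivial shared constraint for the demo
out = dynamic_graph_conv(x, static_adj,
                         rng.standard_normal((C, D)),
                         rng.standard_normal((C, D)))
print(out.shape)  # (5, 8)
```

The shared `static_adj` is the part that would be learned once for all samples; only the low-cost adjustment is computed per sample, which is where the parameter savings described in the abstract would come from.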

2.
IEEE Trans Image Process ; 31: 3973-3986, 2022.
Article in English | MEDLINE | ID: mdl-35648878

ABSTRACT

Video-based human pose estimation (VHPE) is a vital yet challenging task. Although deep learning algorithms have made tremendous progress on VHPE, many of these approaches model the long-range interaction between joints only implicitly, by expanding the receptive field of the convolution or by designing a graph manually. Unlike prior methods, we design a lightweight, plug-and-play joint relation extractor (JRE) to model the associative relationship between joints explicitly and automatically. The JRE takes the pseudo heatmaps of joints as input and computes their similarity. In this way, the JRE can flexibly learn the correlation between any two joints, allowing it to capture the rich spatial configuration of human poses. Furthermore, the JRE can infer invisible joints from the correlations between joints, which helps locate occluded joints. Combined with temporal semantic continuity modeling, we then propose a Relation-based Pose Semantics Transfer Network (RPSTN) for video-based human pose estimation. Specifically, to capture the temporal dynamics of poses, the pose semantic information of the current frame is transferred to the next with a joint relation guided pose semantics propagator (JRPSP), which can transfer pose semantic features from non-occluded frames to occluded frames. The proposed RPSTN achieves state-of-the-art or competitive results on the video-based Penn Action, Sub-JHMDB, PoseTrack2018, and HiEve datasets. Moreover, the proposed JRE improves the performance of backbones on the image-based COCO2017 dataset. Code is available at https://github.com/YHDang/pose-estimation.
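The core JRE idea — computing a relation matrix from pairwise similarity of per-joint pseudo heatmaps — can be sketched as below. This is an assumption-laden illustration, not the authors' implementation: the function name `joint_relation` and the choice of cosine similarity are hypothetical; the abstract only specifies that similarity between joint heatmaps is calculated.

```python
import numpy as np

def joint_relation(heatmaps):
    """Compute a (J, J) joint-relation matrix from pseudo heatmaps
    (illustrative sketch using cosine similarity, not the actual JRE).

    heatmaps : (J, H, W) one pseudo heatmap per joint
    """
    num_joints = heatmaps.shape[0]
    flat = heatmaps.reshape(num_joints, -1)            # flatten each heatmap
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    flat = flat / norms                                # unit-normalize rows
    return flat @ flat.T                               # pairwise similarity

rng = np.random.default_rng(1)
rel = joint_relation(rng.random((13, 16, 16)))         # 13 joints, 16x16 maps
print(rel.shape)  # (13, 13)
```

Because the relation matrix is computed from the heatmaps themselves rather than from a hand-designed skeleton graph, correlations between any two joints (including non-adjacent ones) can be learned, which is what lets a module like this support occluded-joint inference.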


Subject(s)
Algorithms, Semantics, Humans