Temporal Representation Learning on Monocular Videos for 3D Human Pose Estimation.
IEEE Trans Pattern Anal Mach Intell; 45(5): 6415-6427, 2023 May.
Article in English | MEDLINE | ID: mdl-36251908
In this article, we propose an unsupervised feature extraction method to capture temporal information from monocular videos: we detect and encode the subject of interest in each frame and leverage contrastive self-supervised (CSS) learning to extract rich latent vectors. Instead of simply treating the latent features of nearby frames as positive pairs and those of temporally distant frames as negative pairs, as in other CSS approaches, we explicitly disentangle each latent vector into a time-variant component and a time-invariant one. We then show that applying the contrastive loss only to the time-variant features, encouraging a gradual transition between nearby and distant frames, and reconstructing the input extracts rich temporal features well suited to human pose estimation. Our approach reduces error by about 50% compared with standard CSS strategies, outperforms other unsupervised single-view methods, and matches the performance of multi-view techniques. When 2D poses are available, our approach extracts even richer latent features and further improves 3D pose estimation accuracy, outperforming other state-of-the-art weakly supervised methods.
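The abstract only sketches the objective at a high level. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the module name DisentangledEncoder, the latent dimensions, the margin, and the loss weights are illustrative assumptions. It splits each frame's latent into time-variant and time-invariant parts, applies the contrastive term only to the time-variant part (pulling nearby frames together, pushing temporally distant frames apart), and adds a reconstruction term on the input features.

# Illustrative sketch only; names, dimensions, and weights are assumptions,
# not the paper's architecture or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Maps a per-frame feature vector to a latent split into
    (time-variant, time-invariant) parts, plus a reconstruction of the input."""

    def __init__(self, in_dim=2048, var_dim=128, inv_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, var_dim + inv_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(var_dim + inv_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )
        self.var_dim = var_dim

    def forward(self, x):
        z = self.encoder(x)
        z_var, z_inv = z[:, : self.var_dim], z[:, self.var_dim:]
        recon = self.decoder(z)
        return z_var, z_inv, recon


def training_loss(model, f_anchor, f_near, f_far, margin=0.5, w_recon=1.0):
    """Contrastive loss restricted to time-variant features, plus reconstruction.

    f_anchor, f_near, f_far: per-frame features from an anchor frame, a temporally
    nearby frame (positive), and a temporally distant frame (negative)."""
    zv_a, _, rec_a = model(f_anchor)
    zv_n, _, _ = model(f_near)
    zv_f, _, _ = model(f_far)

    # Pull time-variant features of nearby frames together; push distant ones apart.
    d_near = 1.0 - F.cosine_similarity(zv_a, zv_n, dim=-1)
    d_far = 1.0 - F.cosine_similarity(zv_a, zv_f, dim=-1)
    contrastive = (d_near + F.relu(margin - d_far)).mean()

    # Reconstruction keeps the full latent informative about the input frame.
    recon = F.mse_loss(rec_a, f_anchor)
    return contrastive + w_recon * recon

As a usage sketch, f_anchor, f_near, and f_far would each be a batch of backbone features for crops of the detected subject, sampled from the same video at small and large temporal offsets; the time-invariant half of the latent is left out of the contrastive term by design.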
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Algorithms / Learning
Limit: Humans
Language: En
Journal: IEEE Trans Pattern Anal Mach Intell
Journal subject: Medical Informatics
Year: 2023
Document type: Article
Country of publication: United States