Neural Netw. 2024 Nov; 179: 106578.
Article in English | MEDLINE | ID: mdl-39111158

ABSTRACT

Self-supervised contrastive learning draws on powerful representational models to acquire generic semantic features from unlabeled data, and the key to training such models lies in how accurately they track motion features. Previous video contrastive learning methods have extensively used spatially or temporally augmented clips as similar instances, so the resulting models are more likely to learn static backgrounds than motion features. To alleviate these background shortcuts, in this paper we propose a cross-view motion consistent (CVMC) self-supervised video inter-intra contrastive model that focuses on learning local details and long-term temporal relationships. Specifically, we first extract the dynamic features of consecutive video snippets and then align these features based on multi-view motion consistency. Meanwhile, we use the optimized dynamic features for instance comparison across different videos and for comparison of local spatial fine-grained features with temporal order within the same video. Ultimately, the joint optimization of spatio-temporal alignment and motion discrimination effectively addresses the missing components of instance recognition, spatial compactness, and temporal perception in self-supervised learning. Experimental results show that our self-supervised model effectively learns visual representation information and achieves highly competitive performance compared with other state-of-the-art methods on both action recognition and video retrieval tasks.
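
The abstract does not give the loss formulations, so the following is only a minimal sketch, assuming a standard InfoNCE-style inter-video contrastive term plus a simple intra-video temporal-order term; all names (inter_video_infonce, intra_video_order, order_head, etc.) are hypothetical and not the paper's actual API or method.

```python
# Hedged sketch of an inter-intra video contrastive objective (assumptions, not CVMC itself).
import torch
import torch.nn.functional as F

def inter_video_infonce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (B, D) motion features of two views of the same B videos."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # (B, B) cross-view similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching views of the same video are positives; all other videos are negatives.
    return F.cross_entropy(logits, targets)

def intra_video_order(clip_feats: torch.Tensor, order_head: torch.nn.Module) -> torch.Tensor:
    """clip_feats: (B, T, D) features of T consecutive snippets per video.
    order_head: classifier mapping T*D -> 2 logits (ordered vs. shuffled)."""
    B, T, D = clip_feats.shape
    perm = torch.randperm(T)
    while torch.equal(perm, torch.arange(T)):       # avoid the identity permutation
        perm = torch.randperm(T)
    shuffled = clip_feats[:, perm, :]
    inputs = torch.cat([clip_feats, shuffled], dim=0).reshape(2 * B, T * D)
    labels = torch.cat([torch.ones(B), torch.zeros(B)]).long().to(clip_feats.device)
    return F.cross_entropy(order_head(inputs), labels)

# Example joint objective (weighting is illustrative):
# loss = inter_video_infonce(z1, z2) + 0.5 * intra_video_order(clip_feats, order_head)
```
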


Subjects
Video Recording , Humans , Neural Networks, Computer , Motion Perception/physiology , Supervised Machine Learning , Motion (Physics) , Algorithms