IEEE Trans Pattern Anal Mach Intell ; 46(7): 5192-5208, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38354071

ABSTRACT

Previous work on video captioning aims to describe video content objectively, but the resulting captions lack human interest and attractiveness, limiting their practical applications. Video title generation (video titling) instead aims to produce attractive titles, but the task lacks benchmarks. This work offers CREATE, the first large-scale Chinese shoRt vidEo retrievAl and Title gEneration dataset, to support research and applications in video titling, video captioning, and video retrieval in Chinese. CREATE comprises a high-quality labeled 210K dataset and two web-scale pre-training datasets of 3M and 10M samples, covering 51 categories, 50K+ tags, 537K+ manually annotated titles and captions, and 10M+ short videos with original video information. This work also presents ACTEr, an Attractiveness-Consensus-based Title Evaluation metric, to objectively evaluate the quality of generated video titles. The metric measures the semantic correlation between the candidate (model-generated title) and the references (manually labeled titles), and introduces attractiveness-consensus weights to assess both the attractiveness and the relevance of the video title. Accordingly, this work proposes a novel multi-modal ALignment WIth Generation model, ALWIG, as a strong baseline to aid future model development. With the help of a tag-driven video-text alignment module and a GPT-based generation module, the model performs video titling, captioning, and retrieval simultaneously. We believe that the release of the CREATE dataset, ACTEr metric, and ALWIG model will encourage in-depth research on the analysis and creation of Chinese short videos.
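The evaluation idea behind ACTEr can be sketched as a weighted average: each reference title contributes its semantic similarity to the candidate, scaled by an attractiveness-consensus weight. The sketch below is an illustration only, not the paper's implementation: `cosine_sim` is a hypothetical bag-of-words stand-in for the learned semantic-correlation score the paper uses, and the weight normalization is an assumption.

```python
from collections import Counter
from math import sqrt

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity.

    Hypothetical stand-in for the learned semantic-correlation
    score that ACTEr actually uses.
    """
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def acter_like_score(candidate: str,
                     references: list[str],
                     weights: list[float]) -> float:
    """Attractiveness-weighted average similarity between a candidate
    title and the reference titles.

    `weights` encodes the annotators' attractiveness consensus for each
    reference; normalizing by their sum keeps the score in [0, 1]
    (an assumption of this sketch).
    """
    total = sum(weights)
    return sum(w * cosine_sim(candidate, r)
               for r, w in zip(references, weights)) / total
```

A more attractive reference (larger weight) pulls the score toward its similarity with the candidate, so a candidate resembling the attractive references scores higher than one resembling only the bland ones.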
