Results 1 - 5 of 5
1.
IEEE Trans Pattern Anal Mach Intell ; 46(6): 4174-4187, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38236680

ABSTRACT

The query-oriented micro-video summarization task aims to generate a concise sentence with two properties: (a) it summarizes the main semantics of the micro-video, and (b) it is expressed in the form of a search query to facilitate retrieval. Despite its enormous application value in the retrieval area, this direction has barely been explored. Previous studies of summarization mostly focus on content summarization for traditional long videos. Directly applying them tends to yield unsatisfactory results because of the unique features of micro-videos and queries: diverse entities and complex scenes within a short time span, semantic gaps between modalities, and varied queries with distinct expressions. To adapt to these characteristics, we propose a query-oriented micro-video summarization model, dubbed QMS, which employs an encoder-decoder transformer architecture as its skeleton. The multi-modal (visual and textual) signals are passed through two modal-specific encoders to obtain their representations, followed by an entity-aware representation learning module that identifies and highlights critical entity information. For optimization, given the large semantic gaps between modalities, we assign the modalities different confidence scores according to their semantic relevance. Additionally, we develop a novel strategy to sample an effective target query from the diverse query set with its various expressions. Extensive experiments demonstrate the superiority of the QMS scheme over several state-of-the-art methods on both the summarization and retrieval tasks.
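To make the described skeleton concrete, below is a minimal PyTorch sketch of two modal-specific encoders feeding a transformer decoder, plus a confidence-weighted token loss. All layer sizes, names (QMSSketch, confidence_weighted_loss), and the way the confidence score enters the loss are illustrative assumptions; the entity-aware module and the target-query sampling strategy are omitted, and this is not the authors' implementation.

import torch
import torch.nn as nn

class QMSSketch(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        # Modal-specific encoders: one stream for visual frame features, one for text tokens.
        self.visual_proj = nn.Linear(2048, d_model)          # assumed CNN feature size
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.visual_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        # Decoder generates the query-style summary over the fused multi-modal memory.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, frames, captions, query_in):
        v = self.visual_encoder(self.visual_proj(frames))    # (B, Tv, d_model)
        t = self.text_encoder(self.text_embed(captions))     # (B, Tt, d_model)
        memory = torch.cat([v, t], dim=1)                     # fused multi-modal memory
        h = self.decoder(self.text_embed(query_in), memory)
        return self.out(h)                                    # (B, Tq, vocab_size)

def confidence_weighted_loss(logits, targets, confidence):
    # Token-level cross-entropy scaled by a per-sample confidence score meant to
    # reflect cross-modal semantic relevance (the paper's exact scheme is not reproduced).
    ce = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="none"
    ).view(targets.shape)                                     # (B, Tq)
    return (confidence.unsqueeze(1) * ce).mean()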

2.
IEEE Trans Image Process ; 32: 5794-5807, 2023.
Article in English | MEDLINE | ID: mdl-37843991

ABSTRACT

Talking face generation is the process of synthesizing a lip-synchronized video given a reference portrait and an audio clip. However, generating a fine-grained talking video is nontrivial due to several challenges: 1) capturing vivid facial expressions, such as muscle movements; 2) ensuring smooth transitions between consecutive frames; and 3) preserving the details of the reference portrait. Existing efforts have focused only on modeling rigid lip movements, resulting in low-fidelity videos with jerky facial muscle deformations. To address these challenges, we propose a novel Fine-gRained mOtioN moDel (FROND), consisting of three components. In the first component, we adopt a two-stream encoder to capture local facial movement keypoints and embed their overall motion context as a global code. In the second component, we design a motion estimation module to predict audio-driven movements. This enables the learning of local keypoint motion in a continuous trajectory space, yielding temporally smooth facial movements. Additionally, the local and global motions are fused to estimate a continuous dense motion field, resulting in spatially smooth movements. In the third component, we devise a novel implicit image decoder based on an implicit neural network. This decoder recovers high-frequency information from the input image, resulting in a high-fidelity talking face. In summary, FROND refines the motion trajectories of facial keypoints into a continuous dense motion field, which is followed by a decoder that fully exploits the inherent smoothness of the motion. We conduct quantitative and qualitative model evaluations on benchmark datasets. The experimental results show that our proposed FROND significantly outperforms several state-of-the-art baselines.
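As a rough illustration of the third component only, the PyTorch sketch below shows an implicit-decoder-style module: an MLP maps continuous pixel coordinates plus a bilinearly sampled feature vector to RGB values. The layer sizes, the name ImplicitDecoder, and the use of grid_sample are assumptions for illustration rather than the paper's actual decoder; the keypoint and motion-estimation components are not shown.

import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                 # RGB output per queried coordinate
        )

    def forward(self, feat_map, coords):
        # feat_map: (B, C, H, W) features, e.g. already warped by a dense motion field.
        # coords:   (B, N, 2) continuous query coordinates in [-1, 1].
        B, N, _ = coords.shape
        # Bilinearly sample a feature vector at each continuous coordinate.
        sampled = nn.functional.grid_sample(
            feat_map, coords.view(B, N, 1, 2), align_corners=True
        ).squeeze(-1).permute(0, 2, 1)            # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))  # (B, N, 3)

Because the decoder is queried at continuous coordinates, the output resolution is decoupled from the feature-map resolution, which is one common motivation for implicit image decoders.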

3.
IEEE Trans Image Process ; 32: 5537-5549, 2023.
Article in English | MEDLINE | ID: mdl-37773902

ABSTRACT

Visual Question Answering (VQA) is fundamentally compositional in nature, and many questions can be answered by decomposing them into modular sub-problems. The recently proposed Neural Module Networks (NMNs) apply this strategy to question answering, yet they rely heavily on off-the-shelf layout parsers or additional expert policies for network architecture design instead of learning it from the data. These strategies result in unsatisfactory adaptability to the semantically complicated variance of the inputs, thereby hindering the representational capacity and generalizability of the model. To tackle this problem, we propose a Semantic-aware modUlar caPsulE Routing framework, termed SUPER, to better capture instance-specific vision-semantic characteristics and refine discriminative representations for prediction. In particular, five powerful specialized modules as well as dynamic routers are tailored for each layer of the SUPER network, and compact routing spaces are constructed so that a variety of customizable routes can be sufficiently exploited and the vision-semantic representations can be explicitly calibrated. We comparatively justify the effectiveness and generalization ability of our proposed SUPER scheme on five benchmark datasets, as well as its parameter-efficiency advantage. It is worth emphasizing that this work does not pursue state-of-the-art results in VQA. Instead, we expect our model to provide a novel perspective on architecture learning and representation calibration for VQA.
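A minimal sketch of the dynamic-routing idea follows, assuming a soft mixture over a small bank of modules in PyTorch. The five stand-in modules, the single-linear router, and all names (RoutedLayer) are placeholders and simplifications, not the SUPER implementation or its actual routing space.

import torch
import torch.nn as nn

class RoutedLayer(nn.Module):
    def __init__(self, dim=512, n_modules=5):
        super().__init__()
        # Stand-ins for the specialized modules; the real modules differ in design.
        self.module_bank = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(n_modules)]
        )
        # Router predicts instance-specific mixture weights from the fused
        # vision-semantic representation.
        self.router = nn.Linear(dim, n_modules)

    def forward(self, x):
        # x: (B, dim) fused vision-semantic feature for one instance.
        weights = torch.softmax(self.router(x), dim=-1)                 # (B, n_modules)
        outputs = torch.stack([m(x) for m in self.module_bank], dim=1)  # (B, n_modules, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)             # (B, dim)

Stacking several such layers yields an instance-specific route through the module bank, since each layer's mixture weights depend on the representation produced by the previous layer.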

4.
IEEE Trans Image Process ; 32: 3836-3846, 2023.
Article in English | MEDLINE | ID: mdl-37410654

ABSTRACT

Visual Commonsense Reasoning (VCR), deemed a challenging extension of Visual Question Answering (VQA), pursues a higher level of visual comprehension. VCR includes two complementary processes: question answering over a given image and rationale inference that explains the answer. Over the years, a variety of VCR methods have pushed forward the state of the art on the benchmark dataset. Despite the significance of these methods, they often treat the two processes separately and hence decompose VCR into two unrelated VQA instances. As a result, the pivotal connection between question answering and rationale inference is broken, rendering existing efforts less faithful to visual reasoning. To study this issue, we perform in-depth empirical explorations in terms of both language shortcuts and generalization capability. Based on our findings, we then propose a plug-and-play knowledge distillation enhanced framework to couple the question answering and rationale inference processes. The key contribution lies in the introduction of a new branch, which serves as a relay to bridge the two processes. Given that our framework is model-agnostic, we apply it to existing popular baselines and validate its effectiveness on the benchmark dataset. As the experimental results demonstrate, when equipped with our method, these baselines all achieve consistent and significant performance improvements, verifying the viability of coupling the two processes.
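The PyTorch sketch below shows one way a relay branch could couple the two processes through standard knowledge distillation. The temperature, the weighting alpha, and the direction of distillation are assumptions, since the abstract does not specify the exact losses; this is not the authors' formulation.

import torch
import torch.nn.functional as F

def distill(student_logits, teacher_logits, temperature=2.0):
    # Standard KD loss: KL divergence between softened distributions.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1).detach()
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

def coupled_loss(answer_logits, rationale_logits, relay_logits,
                 answer_target, rationale_target, alpha=0.5):
    # Hard-label losses for the two original VCR sub-tasks.
    task = F.cross_entropy(answer_logits, answer_target) + \
           F.cross_entropy(rationale_logits, rationale_target)
    # The relay branch bridges the two processes by passing soft knowledge to both.
    bridge = distill(answer_logits, relay_logits) + distill(rationale_logits, relay_logits)
    return task + alpha * bridge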

5.
IEEE Trans Image Process ; 29: 1-14, 2020.
Article in English | MEDLINE | ID: mdl-31265394

ABSTRACT

The prevailing characteristics of micro-videos limit the descriptive power of each individual modality. Several pioneering efforts on micro-video representation are limited to implicitly exploring the consistency between different modalities while ignoring their complementarity. In this paper, we focus on how to explicitly separate consistent features and complementary features from the mixed information and harness their combination to improve the expressiveness of each modality. Toward this end, we present a neural multimodal cooperative learning (NMCL) model that splits the consistent component from the complementary component via a novel relation-aware attention mechanism. Specifically, the computed attention score measures the correlation between the features extracted from different modalities. A threshold is then learned for each modality to distinguish the consistent features from the complementary ones according to the score. Thereafter, we integrate the consistent parts to enhance the representations and supplement the complementary ones to reinforce the information in each modality. As for redundant information, which may cause overfitting and is hard to distinguish, we devise an attention network to dynamically capture the features closely related to the category and output a discriminative representation for prediction. Experimental results on a real-world micro-video dataset show that NMCL outperforms state-of-the-art methods. Further studies verify the effectiveness and the cooperative effects brought by the attentive mechanism.
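A rough PyTorch sketch of the threshold-based split between consistent and complementary features follows. The bilinear relation score, the soft sigmoid gate (used here to keep the split differentiable), and the name CooperativeSplit are illustrative assumptions rather than the NMCL implementation.

import torch
import torch.nn as nn

class CooperativeSplit(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)           # relation-aware score per feature pair
        self.threshold = nn.Parameter(torch.zeros(1))   # learned per-modality threshold

    def forward(self, x, other):
        # x, other: (B, N, dim) aligned feature sequences from two modalities.
        s = self.score(x, other).squeeze(-1)             # (B, N) cross-modal correlation scores
        gate = torch.sigmoid(s - self.threshold)         # near 1: consistent, near 0: complementary
        consistent = gate.unsqueeze(-1) * x
        complementary = (1.0 - gate).unsqueeze(-1) * x
        # Consistent parts enhance the shared representation; complementary parts
        # supplement modality-specific information before prediction.
        return consistent, complementary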


Subject(s)
Data Mining/methods, Image Processing, Computer-Assisted/methods, Machine Learning, Algorithms, Animals, Dogs, Semantics, Video Recording