IEEE Trans Image Process ; 32: 5794-5807, 2023.
Article in English | MEDLINE | ID: mdl-37843991

ABSTRACT

Talking face generation is the process of synthesizing a lip-synchronized video from a reference portrait and an audio clip. However, generating a fine-grained talking video is nontrivial due to several challenges: 1) capturing vivid facial expressions, such as muscle movements; 2) ensuring smooth transitions between consecutive frames; and 3) preserving the details of the reference portrait. Existing efforts have focused only on modeling rigid lip movements, resulting in low-fidelity videos with jerky facial muscle deformations. To address these challenges, we propose a novel Fine-gRained mOtioN moDel (FROND), consisting of three components. In the first component, we adopt a two-stream encoder to capture local facial movement keypoints and embed their overall motion context as a global code. In the second component, we design a motion estimation module to predict audio-driven movements. This enables the learning of local keypoint motion in the continuous trajectory space to achieve smooth temporal facial movements. Additionally, the local and global motions are fused to estimate a continuous dense motion field, resulting in spatially smooth movements. In the third component, we devise a novel implicit image decoder based on an implicit neural network. This decoder recovers high-frequency information from the input image, resulting in a high-fidelity talking face. In summary, FROND refines the motion trajectories of facial keypoints into a continuous dense motion field, which is followed by a decoder that fully exploits the inherent smoothness of the motion. We conduct quantitative and qualitative model evaluations on benchmark datasets. The experimental results show that our proposed FROND significantly outperforms several state-of-the-art baselines.
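The abstract's second component fuses sparse keypoint motions into a "continuous dense motion field." The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of one common way to realize that step: Gaussian-weighted interpolation of per-keypoint displacement vectors over an image coordinate grid (the function name, the grid resolution, and the `sigma` bandwidth are all illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def dense_motion_field(keypoints, displacements, grid_size=64, sigma=0.2):
    """Fuse sparse keypoint displacements into a spatially smooth dense
    motion field via normalized Gaussian-weighted interpolation.

    keypoints:     (K, 2) array of keypoint (x, y) positions in [-1, 1]^2
    displacements: (K, 2) array of per-keypoint motion vectors
    Returns a (grid_size, grid_size, 2) dense flow field.

    NOTE: an illustrative sketch of dense-motion fusion in general,
    not the FROND architecture itself.
    """
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, grid_size),
        np.linspace(-1.0, 1.0, grid_size),
        indexing="ij",
    )
    grid = np.stack([xs, ys], axis=-1)                     # (H, W, 2)
    # Squared distance from every grid cell to every keypoint: (H, W, K)
    d2 = ((grid[:, :, None, :] - keypoints[None, None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))                   # Gaussian weights
    w /= w.sum(axis=-1, keepdims=True) + 1e-8              # normalize over K
    # Weighted sum of keypoint displacements -> smooth dense field
    return np.einsum("hwk,kc->hwc", w, displacements)
```

Because the weights vary smoothly with position, neighboring grid cells receive nearly identical motion vectors, which is the spatial-smoothness property the abstract attributes to the fused field.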
