StyleTalk++: A Unified Framework for Controlling the Speaking Styles of Talking Heads.
IEEE Trans Pattern Anal Mach Intell ; 46(6): 4331-4347, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38265906
ABSTRACT
Individuals have unique facial expression and head-pose styles that reflect their personalized speaking styles. Existing one-shot talking-head methods cannot capture such personalized characteristics and therefore fail to produce diverse speaking styles in the generated videos. To address this challenge, we propose a one-shot, style-controllable talking-face generation method that obtains speaking styles from reference speaking videos and drives a one-shot portrait to speak with those reference styles and a separate piece of audio. Our method synthesizes the style-controllable coefficients of a 3D Morphable Model (3DMM), covering both facial expressions and head movements, in a unified framework. Specifically, the framework first leverages a style encoder to extract the desired speaking styles from the reference videos and transform them into style codes. A style-aware decoder then synthesizes the 3DMM coefficients from the audio input and the style codes. During decoding, the framework adopts a two-branch architecture, which generates the stylized facial expression coefficients and the stylized head movement coefficients, respectively. Given these 3DMM coefficients, an image renderer produces a specific person's talking-head video. Extensive experiments demonstrate that our method generates visually authentic talking-head videos with diverse speaking styles from only one portrait image and an audio clip.
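The pipeline described in the abstract — a style encoder that compresses a reference video's 3DMM coefficient sequence into a style code, and a two-branch style-aware decoder that maps audio features plus that code to per-frame expression and head-pose coefficients — can be sketched as follows. This is not the authors' implementation; all dimensions, function names, and the linear layers are illustrative assumptions standing in for the learned networks.

```python
# Illustrative sketch of the StyleTalk++-style pipeline (hypothetical shapes
# and untrained random weights, not the published model).
import numpy as np

rng = np.random.default_rng(0)

EXP_DIM, POSE_DIM, AUDIO_DIM, STYLE_DIM = 64, 6, 80, 128  # assumed sizes

def style_encoder(ref_coeffs: np.ndarray) -> np.ndarray:
    """Pool a (T_ref, EXP_DIM + POSE_DIM) reference coefficient sequence
    from the style video into a single style code of shape (STYLE_DIM,)."""
    W = rng.standard_normal((ref_coeffs.shape[1], STYLE_DIM)) * 0.01
    return np.tanh(ref_coeffs.mean(axis=0) @ W)

def style_aware_decoder(audio_feats: np.ndarray, style: np.ndarray):
    """Two-branch decoder: each frame's audio feature is concatenated with
    the style code, then projected separately into stylized expression
    coefficients and stylized head-pose coefficients."""
    T = audio_feats.shape[0]
    x = np.concatenate([audio_feats, np.tile(style, (T, 1))], axis=1)
    W_exp = rng.standard_normal((x.shape[1], EXP_DIM)) * 0.01
    W_pose = rng.standard_normal((x.shape[1], POSE_DIM)) * 0.01
    return x @ W_exp, x @ W_pose  # (T, EXP_DIM), (T, POSE_DIM)

# Reference style clip (90 frames) and driving audio (120 frames).
ref = rng.standard_normal((90, EXP_DIM + POSE_DIM))
audio = rng.standard_normal((120, AUDIO_DIM))

code = style_encoder(ref)
exp_coeffs, pose_coeffs = style_aware_decoder(audio, code)
# These per-frame 3DMM coefficients would then drive an image renderer.
print(exp_coeffs.shape, pose_coeffs.shape)
```

In the paper's framework the encoder and decoder are learned networks and the final step is an image renderer conditioned on the single portrait; the sketch only shows how the style code decouples "what is said" (audio) from "how it is said" (reference style).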
Subjects

Full text: 1 Database: MEDLINE Main subjects: Speech / Video Recording / Head Movements / Facial Expression Study type: Prognostic_studies Limits: Humans Language: En Publication year: 2024 Document type: Article