TalkingStyle: Personalized Speech-Driven 3D Facial Animation with Style Preservation.
Article in English | MEDLINE | ID: mdl-38861445
ABSTRACT
Creating realistic 3D avatars that accurately replicate an individual's speech and unique talking style is a challenging task in speech-driven facial animation. Existing techniques have made remarkable progress but still struggle to achieve lifelike mimicry. This paper proposes "TalkingStyle", a novel method to generate personalized talking avatars while retaining the talking style of the person. Our approach uses a set of audio and animation samples from an individual to create new facial animations that closely resemble their specific talking style, synchronized with speech. We disentangle the style codes from the motion patterns, allowing our method to associate a distinct identifier with each person. To manage each aspect effectively, we employ three separate encoders for style, speech, and motion, ensuring the preservation of the original style while maintaining consistent motion in our stylized talking avatars. Additionally, we propose a new style-conditioned transformer decoder, offering greater flexibility and control over the facial avatar styles. We comprehensively evaluate TalkingStyle through qualitative and quantitative assessments, as well as user studies, demonstrating its superior realism and lip-synchronization accuracy compared to current state-of-the-art methods. To promote transparency and further advancements in the field, we also make the source code publicly available at https://github.com/wangxuanx/TalkingStyle.
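The abstract does not give implementation details, but the architecture it describes (three separate encoders for style, speech, and motion feeding a style-conditioned transformer decoder) can be sketched in PyTorch. The sketch below is a minimal illustration under stated assumptions, not the authors' released code: the per-identity style embedding, additive style conditioning, the wav2vec-style audio features, and all module names and dimensions are hypothetical; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class TalkingStyleSketch(nn.Module):
    """Hypothetical sketch: three encoders plus a style-conditioned decoder."""

    def __init__(self, n_identities, audio_dim=768, motion_dim=15069, d_model=256):
        super().__init__()
        # Style encoder: a learned per-identity embedding serves as the
        # style code, kept separate (disentangled) from motion content.
        self.style_embedding = nn.Embedding(n_identities, d_model)
        # Speech encoder: projects precomputed audio features (e.g. wav2vec 2.0
        # frames, a common choice in this literature) into the model dimension.
        self.speech_encoder = nn.Linear(audio_dim, d_model)
        # Motion encoder: embeds past vertex offsets for autoregressive decoding.
        self.motion_encoder = nn.Linear(motion_dim, d_model)
        # Style-conditioned transformer decoder: motion queries attend to the
        # encoded speech; here the style code conditions the queries additively.
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, motion_dim)

    def forward(self, audio_feats, prev_motion, identity):
        # audio_feats: (B, T, audio_dim), prev_motion: (B, T, motion_dim),
        # identity: (B,) integer ids selecting each speaker's style code.
        style = self.style_embedding(identity).unsqueeze(1)  # (B, 1, d_model)
        memory = self.speech_encoder(audio_feats)            # (B, T, d_model)
        queries = self.motion_encoder(prev_motion) + style   # style conditioning
        t = queries.size(1)
        # Causal mask so each animation frame attends only to past frames.
        mask = torch.triu(
            torch.full((t, t), float("-inf"), device=queries.device), diagonal=1)
        out = self.decoder(queries, memory, tgt_mask=mask)
        return self.head(out)                                # predicted offsets

# Usage: one forward pass for speaker id 0. 15069 = 5023 vertices x 3 coords,
# the FLAME/VOCASET mesh resolution -- an assumption, not stated in the abstract.
model = TalkingStyleSketch(n_identities=8)
audio = torch.randn(1, 10, 768)
motion = torch.zeros(1, 10, 15069)
print(model(audio, motion, torch.tensor([0])).shape)  # torch.Size([1, 10, 15069])
```

Swapping the integer style id for a new speaker's embedding is what the abstract means by associating a distinct identifier with each person: the speech and motion pathways stay fixed while the style code alone changes the output.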

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: IEEE Trans Vis Comput Graph Journal subject: MEDICAL INFORMATICS Year: 2024 Document type: Article