One-shot many-to-many facial reenactment using Bi-Layer Graph Convolutional Networks.
Saeed, Uzair; Armghan, Ammar; Quanyu, Wang; Alenezi, Fayadh; Yue, Sun; Tiwari, Prayag.
Affiliations
  • Saeed U; Department of Computer Science and Technology, Beijing Institute of Technology, 5 Zhongguancun St, Haidian Qu, 100081, Beijing, China. Electronic address: uzairsaeed@bit.edu.cn.
  • Armghan A; Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka, Saudi Arabia. Electronic address: aarmghan@ju.edu.sa.
  • Quanyu W; Department of Computer Science and Technology, Beijing Institute of Technology, 5 Zhongguancun St, Haidian Qu, 100081, Beijing, China. Electronic address: wangquanyu@bit.edu.cn.
  • Alenezi F; Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka, Saudi Arabia. Electronic address: fshenezi@ju.edu.sa.
  • Yue S; Department of Computer Science and Technology, Beijing Institute of Technology, 5 Zhongguancun St, Haidian Qu, 100081, Beijing, China. Electronic address: sunyue@bit.edu.cn.
  • Tiwari P; School of Information Technology, Halmstad University, Sweden. Electronic address: prayag.tiwari@ieee.org.
Neural Netw ; 156: 193-204, 2022 Dec.
Article en En | MEDLINE | ID: mdl-36274526
ABSTRACT
Facial reenactment aims to animate a source face image into a new pose and expression using a driving facial image. Existing few-shot approaches are either designed around one or more specific identities or struggle to preserve identity in one-shot settings. Previous work has modeled facial reenactment using multiple pictures of the same subject. In contrast, this paper presents a novel one-shot many-to-many facial reenactment model that uses only a single facial image. The proposed model produces a face that adopts the target expression while preserving the source identity. The technique can simulate motion from a single image by decomposing the object into two layers. Combining this bi-layer representation with a Convolutional Neural Network (CNN), we name our model Bi-Layer Graph Convolutional Layers (BGCLN); it is used to create an optical-flow representation from the latent vector, yielding the precise structure and shape of the optical flow. Comprehensive experiments suggest that our technique produces high-quality results and outperforms recent techniques in both qualitative and quantitative comparisons. The proposed system performs facial reenactment at 15 fps, which is approximately real time. Our code is publicly available at https://github.com/usaeed786/BGCLN.
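The record does not include implementation details, but the model name refers to graph convolutional layers. As a purely illustrative sketch (not the authors' code), the following shows one standard graph-convolution layer with symmetric normalization, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the toy landmark graph and all variable names are hypothetical:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees after self-loops
    D_inv_sqrt = np.diag(d ** -0.5)          # symmetric degree normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # normalized adjacency
    return np.maximum(A_norm @ H @ W, 0.0)   # linear transform + ReLU

# Toy graph: 4 "facial landmark" nodes connected in a chain
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).standard_normal((4, 8))  # node features
W = np.random.default_rng(1).standard_normal((8, 2))  # layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2): one 2-d output vector per node
```

In a bi-layer design such as the one described, two such layers could be stacked so that each node's output depends on its two-hop neighborhood; how BGCLN maps these node outputs to an optical-flow field is specific to the paper.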
Full text: 1 Database: MEDLINE Main subject: Neural Networks, Computer Study type: Qualitative_research Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year: 2022 Document type: Article