Robust sound-guided image manipulation.
Lee, Seung Hyun; Chi, Hyung-Gun; Oh, Gyeongrok; Byeon, Wonmin; Yoon, Sang Ho; Park, Hyunje; Cho, Wonjun; Kim, Jinkyu; Kim, Sangpil.
Affiliation
  • Lee SH; Department of Artificial Intelligence, Korea University, South Korea.
  • Chi HG; School of Electrical and Computer Engineering, Purdue University, USA.
  • Oh G; Department of Artificial Intelligence, Korea University, South Korea.
  • Byeon W; NVIDIA Research, NVIDIA Corporation, USA.
  • Yoon SH; Graduate School of Culture Technology, KAIST, South Korea.
  • Park H; Department of Artificial Intelligence, Korea University, South Korea.
  • Cho W; Hanwha Systems, Hanwha Corporation, South Korea.
  • Kim J; Department of Computer Science and Engineering, Korea University, South Korea. Electronic address: jinkyukim@korea.ac.kr.
  • Kim S; Department of Artificial Intelligence, Korea University, South Korea. Electronic address: spk7@korea.ac.kr.
Neural Netw; 175: 106271, 2024 Jul.
Article in En | MEDLINE | ID: mdl-38636319
ABSTRACT
Recent successes suggest that an image can be manipulated by a text prompt, e.g., a landscape scene on a sunny day is transformed into the same scene on a rainy day, driven by the text input "raining". These approaches often rely on a StyleCLIP-based image generator, which leverages a multi-modal (text and image) embedding space. However, we observe that text inputs are often a bottleneck in providing and synthesizing rich semantic cues, e.g., differentiating heavy rain from rain with thunderstorms. To address this issue, we advocate leveraging an additional modality, sound, which has notable advantages for image manipulation as it can convey more diverse semantic cues (vivid emotions or dynamic expressions of the natural world) than text. In this paper, we propose a novel approach that first extends the image-text joint embedding space with sound and then applies a direct latent optimization method to manipulate a given image based on an audio input, e.g., the sound of rain. Our extensive experiments show that our sound-guided image manipulation approach produces semantically and visually more plausible manipulation results than state-of-the-art text- and sound-guided image manipulation methods, which is further confirmed by our human evaluations. Our downstream task evaluations also show that our learned image-text-sound joint embedding space effectively encodes sound inputs. Examples are provided on our project page: https://kuai-lab.github.io/robust-demo/.
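The direct latent optimization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the generator `G`, the image encoder `E_img`, the audio encoder `E_aud`, and the function `manipulate_latent` are all hypothetical placeholders standing in for a StyleGAN-style generator and the learned image-text-sound joint embedding, and the loss weighting is an assumption.

```python
# Sketch of sound-guided direct latent optimization, assuming a StyleGAN-style
# generator G and a joint embedding with image encoder E_img and audio encoder
# E_aud. All names here are hypothetical placeholders, not the paper's API.
import torch
import torch.nn.functional as F

def manipulate_latent(G, E_img, E_aud, w_src, audio, steps=200, lr=0.01, lam=0.8):
    """Optimize a latent code so the generated image matches the audio cue."""
    w = w_src.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        a = F.normalize(E_aud(audio), dim=-1)      # target audio embedding
    for _ in range(steps):
        img = G(w)                                 # synthesize image from latent
        v = F.normalize(E_img(img), dim=-1)        # embed the synthesized image
        # A CLIP-style cosine loss pulls the image embedding toward the sound's
        # semantics; an L2 regularizer keeps the edit close to the source latent
        # so identity and layout are preserved.
        loss = (1 - (v * a).sum(-1)).mean() + lam * (w - w_src).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

The regularizer weight `lam` trades off manipulation strength against faithfulness to the source image; in practice such methods tune it per edit.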
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Sound Limits: Humans Language: En Publication year: 2024 Document type: Article