Detecting images generated by diffusers.
Coccomini, Davide Alessandro; Esuli, Andrea; Falchi, Fabrizio; Gennaro, Claudio; Amato, Giuseppe.
Affiliation
  • Coccomini DA; Institute of Information Science and Technologies "Alessandro Faedo", Italian National Research Council, Pisa, Tuscany, Italy.
  • Esuli A; Information Engineering, University of Pisa, Pisa, Tuscany, Italy.
  • Falchi F; Institute of Information Science and Technologies "Alessandro Faedo", Italian National Research Council, Pisa, Tuscany, Italy.
  • Gennaro C; Institute of Information Science and Technologies "Alessandro Faedo", Italian National Research Council, Pisa, Tuscany, Italy.
  • Amato G; Institute of Information Science and Technologies "Alessandro Faedo", Italian National Research Council, Pisa, Tuscany, Italy.
PeerJ Comput Sci ; 10: e2127, 2024.
Article in En | MEDLINE | ID: mdl-39145210
ABSTRACT
In recent years, the field of artificial intelligence has witnessed a remarkable surge in the generation of synthetic images, driven by advancements in deep learning techniques. These synthetic images, often created through complex algorithms, closely mimic real photographs, blurring the lines between reality and artificiality. This proliferation of synthetic visuals presents a pressing challenge: how to accurately and reliably distinguish between genuine and generated images. This article explores the task of detecting images generated by text-to-image diffusion models, highlighting the challenges and peculiarities of this field. To evaluate this, we consider images generated from captions in the MSCOCO and Wikimedia datasets using two state-of-the-art models, Stable Diffusion and GLIDE. Our experiments show that it is possible to detect the generated images using simple multi-layer perceptrons (MLPs) trained on features extracted by CLIP or RoBERTa, or using traditional convolutional neural networks (CNNs). The latter models achieve remarkable performance, particularly when pretrained on large datasets. We also observe that models trained on images generated by Stable Diffusion can occasionally detect images generated by GLIDE, but only on the MSCOCO dataset; the reverse is not true. Lastly, we find that incorporating the textual information associated with the images can, in some cases, lead to better generalization, especially when the textual features are closely related to the visual ones. We also find that the type of subject depicted in an image can significantly impact performance. This work provides insights into the feasibility of detecting generated images and has implications for security and privacy concerns in real-world applications. The code to reproduce our results is available at https://github.com/davide-coccomini/Detecting-Images-Generated-by-Diffusers.
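The pipeline summarized above (a lightweight classifier on top of frozen CLIP image embeddings) can be sketched in a few lines of Python. The snippet below is an illustrative approximation, not the authors' released code (see the repository linked in the abstract for the actual implementation); the CLIP checkpoint, the MLP size, and the real_paths / fake_paths / query.jpg placeholders are assumptions made for this example.

# Minimal sketch: train a small MLP on frozen CLIP image embeddings to
# separate real photographs from diffusion-generated images.
# Assumes the transformers, torch, scikit-learn, and Pillow packages.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.neural_network import MLPClassifier

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_features(paths):
    # Return one L2-normalized CLIP image embedding per image path.
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt").to(device)
    feats = clip.get_image_features(**inputs)  # shape (N, 512) for this checkpoint
    return torch.nn.functional.normalize(feats, dim=-1).cpu().numpy()

# Placeholder file lists: e.g. MSCOCO photographs vs. Stable Diffusion images
# generated from the corresponding captions.
real_paths, fake_paths = [...], [...]
X = clip_features(real_paths + fake_paths)
y = [0] * len(real_paths) + [1] * len(fake_paths)  # 0 = real, 1 = generated

mlp = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300)
mlp.fit(X, y)
print(mlp.predict(clip_features(["query.jpg"])))  # 1 -> flagged as generated

The same scheme extends to the multimodal variant mentioned in the abstract: caption features (e.g. from RoBERTa) can be concatenated with the image embedding before the classifier, which is one way to incorporate the associated textual information.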
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: PeerJ Comput Sci Year of publication: 2024 Document type: Article Country of affiliation: Italy
