Results 1 - 2 of 2
1.
Sensors (Basel); 23(22), 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-38005425

ABSTRACT

Generative AI has attracted enormous interest due to new applications such as ChatGPT, DALL·E, Stable Diffusion, and Deepfake. In particular, DALL·E, Stable Diffusion, and others (Adobe Firefly, ImagineArt, etc.) can create images from a text prompt, including photorealistic ones. This has prompted intense research into image forensics applications able to distinguish real captured images and videos from artificial ones. Detecting forgeries made with Deepfake is one of the most researched issues. This paper addresses another kind of forgery detection: distinguishing photorealistic AI-created images from real photos taken with a physical camera, that is, making a binary decision on whether an image was artificially or naturally created. The artificial images need not depict any real object, person, or place. For this purpose, two techniques that perform pixel-level feature extraction are used. The first is Photo Response Non-Uniformity (PRNU), a characteristic noise caused by imperfections in the camera sensor that is commonly used for source camera identification; the underlying idea is that AI images will exhibit a different PRNU pattern. The second is error level analysis (ELA), a feature-extraction method traditionally used to detect image editing that photographers now also use for the manual detection of AI-created images. Both kinds of features are used to train convolutional neural networks to differentiate between AI images and real photographs. Good results are obtained, with accuracy rates over 95%. Both extraction methods are carefully assessed by computing precision, recall, and F1-score.
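The abstract does not give implementation details for its ELA feature extraction, but the standard formulation of error level analysis is to recompress an image as JPEG and take the per-pixel difference against the original: regions with an unusual compression history stand out. The sketch below, using Pillow and NumPy, shows that common formulation only; the function name and the quality setting of 90 are illustrative assumptions, not taken from the paper.

```python
import io

import numpy as np
from PIL import Image


def error_level_analysis(img: Image.Image, quality: int = 90) -> np.ndarray:
    """Recompress the image as JPEG and return the per-pixel absolute
    difference ("error level") between the original and the recompressed copy."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    a = np.asarray(img.convert("RGB"), dtype=np.int16)
    b = np.asarray(recompressed, dtype=np.int16)
    return np.abs(a - b).astype(np.uint8)


# A synthetic image stands in for a real photo or an AI-generated one.
demo = Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
ela_map = error_level_analysis(demo)
print(ela_map.shape)  # (64, 64, 3)
```

In a pipeline like the one the paper describes, such an ELA map (rather than the raw pixels) would be fed to a convolutional neural network as the input feature.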

2.
Sensors (Basel); 23(7), 2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37050522

ABSTRACT

In forensic imaging, it is important to be able to extract a camera fingerprint from one image, or a small set of images, known to have been taken by the same camera (or image sensor). We use the word fingerprint because it is a piece of information extracted from images that can identify an individual source camera, which matters in certain security and digital-forensics situations. The camera fingerprint is based on a kind of random noise present in all image sensors that stems from manufacturing imperfections and is therefore unique and impossible to avoid. Photo response nonuniformity (PRNU) has become the most widely used method for source camera identification (SCI). In this paper, a set of attacks is designed and applied to a PRNU-based SCI system, and the success of each method is systematically assessed for both still images and video. An attack method is defined as any processing that minimally alters image quality and is designed to fool PRNU detectors or, more generally, any camera-fingerprint detector. The success of an attack is measured as the increase in the error rate of the SCI system. The PRNU-based SCI system was taken from an outstanding, publicly available reference. Among the results of this work, the following are notable: a systematic and extensive procedure for testing SCI methods, very thorough testing of PRNU with more than 2000 test images, and the discovery of some very effective attacks on PRNU-based SCI.
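The PRNU workflow the abstract relies on can be sketched in a few lines: estimate the fingerprint by averaging the noise residuals of several images from the same camera, then correlate a test image's residual against that fingerprint. This is a deliberately simplified sketch: real PRNU pipelines (e.g. the reference system the paper attacks) use wavelet denoising and a multiplicative noise model, whereas here a plain box blur stands in as the denoiser to keep the example dependency-free, and all names are illustrative.

```python
import numpy as np


def box_denoise(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple box-blur denoiser. Stand-in for the wavelet denoiser
    used in real PRNU pipelines."""
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), k // 2, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)


def estimate_fingerprint(images) -> np.ndarray:
    """Average the noise residuals of several images from the same camera."""
    residuals = [img.astype(np.float64) - box_denoise(img) for img in images]
    return np.mean(residuals, axis=0)


def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two same-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


# Synthetic demo: a fixed sensor-noise pattern shared by one "camera".
rng = np.random.default_rng(0)
prnu = rng.normal(0.0, 3.0, (32, 32))
same_cam = [rng.normal(128.0, 8.0, (32, 32)) + prnu for _ in range(24)]
fingerprint = estimate_fingerprint(same_cam)

probe = rng.normal(128.0, 8.0, (32, 32)) + prnu          # same camera
other = rng.normal(128.0, 8.0, (32, 32))                 # different camera
r_same = correlation(fingerprint, probe - box_denoise(probe))
r_other = correlation(fingerprint, other - box_denoise(other))
print(r_same > r_other)
```

An attack of the kind the paper studies would lightly process `probe` (e.g. mild filtering) so that `r_same` drops below the detector's decision threshold while the image still looks unchanged.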
