Results 1 - 4 of 4
1.
J Imaging ; 9(6), 2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37367470

ABSTRACT

The widespread use of deep learning techniques for creating realistic synthetic media, commonly known as deepfakes, poses a significant threat to individuals, organizations, and society. As the malicious use of these data could lead to serious harm, it is becoming crucial to distinguish between authentic and fake media. Nonetheless, although deepfake generation systems can create convincing images and audio, they may struggle to maintain consistency across different data modalities, such as producing a realistic video sequence in which both the visual frames and the speech are fake yet consistent with each other. Moreover, these systems may fail to reproduce semantically and temporally accurate content. All these elements can be exploited to perform a robust detection of fake content. In this paper, we propose a novel approach for detecting deepfake video sequences by leveraging data multimodality. Our method extracts audio-visual features from the input video over time and analyzes them using time-aware neural networks. We exploit both the video and audio modalities to leverage the inconsistencies between and within them, enhancing the final detection performance. The peculiarity of the proposed method is that we never train on multimodal deepfake data, but on disjoint monomodal datasets containing visual-only or audio-only deepfakes. This frees us from relying on multimodal datasets during training, which is desirable given their scarcity in the literature. Moreover, at test time, it allows us to evaluate the robustness of the proposed detector on unseen multimodal deepfakes. We test different fusion techniques between the data modalities and investigate which one leads to more robust predictions by the developed detectors. Our results indicate that a multimodal approach is more effective than a monomodal one, even when trained on disjoint monomodal datasets.
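
As a reading aid, the following is a minimal sketch of what such a late-fusion, time-aware detector could look like; the feature dimensions, the GRU branches, and the max-based fusion rule are illustrative assumptions, not the architecture from the paper.

# Minimal sketch (not the authors' code) of late fusion between
# per-modality, time-aware branches for deepfake detection.
# Feature dimensions, layer sizes, and the fusion rule are assumptions.
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, hidden=64):
        super().__init__()
        # Each modality gets its own recurrent (time-aware) branch.
        self.video_rnn = nn.GRU(video_dim, hidden, batch_first=True)
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        # Per-modality score heads: each branch can be trained on its own
        # monomodal (visual-only or audio-only) deepfake data.
        self.video_head = nn.Linear(hidden, 1)
        self.audio_head = nn.Linear(hidden, 1)

    def forward(self, video_feats, audio_feats):
        # video_feats: (batch, time, video_dim); audio_feats: (batch, time, audio_dim)
        _, v_state = self.video_rnn(video_feats)
        _, a_state = self.audio_rnn(audio_feats)
        v_score = self.video_head(v_state[-1])
        a_score = self.audio_head(a_state[-1])
        # Late fusion: the clip is flagged if either modality looks fake.
        return torch.sigmoid(torch.maximum(v_score, a_score))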

2.
Sci Rep ; 12(1): 18306, 2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36316363

ABSTRACT

Many of the images found in scientific publications are retouched, reused, or composed to enhance the quality of the presentation. In most instances, these edits are benign and help the reader better understand the material in a paper. However, some edits constitute scientific misconduct and undermine the integrity of the presented research. Determining the legitimacy of edits made to scientific images is an open problem that no current technology can solve satisfactorily in a fully automated fashion. It thus remains up to human experts to inspect images as part of the peer-review process. Nonetheless, image analysis technologies promise to help experts perform this essential yet arduous task. Therefore, we introduce SILA, a system that makes image analysis tools available to reviewers and editors in a principled way. Further, SILA is the first human-in-the-loop end-to-end system that starts by processing article PDF files, performs image manipulation detection on the automatically extracted figures, and ends with image provenance graphs expressing the relationships between the images in question, to explain potential problems. To assess its efficacy, we introduce a dataset of scientific papers from around the globe containing annotated image manipulations and inadvertent reuse, which can serve as a benchmark for the problem at hand. Qualitative and quantitative results of the system on this dataset are reported.


Subject(s)
Image Processing, Computer-Assisted; Scientific Misconduct; Humans; Publications
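
To illustrate the shape of the pipeline described in the abstract, here is a minimal sketch of a PDF-to-provenance-graph flow; the function names, the injected callables, and the graph representation are placeholders rather than SILA's actual API.

# Minimal sketch of the pipeline stages described above; function names and
# the graph representation are placeholders, not SILA's actual interface.
from dataclasses import dataclass, field

@dataclass
class ProvenanceGraph:
    # Directed edges: (source figure, derived figure, relationship label)
    edges: list = field(default_factory=list)

def analyze_article(pdf_path, extract_figures, detect_manipulation, relate):
    """Run the end-to-end flow: PDF -> figures -> detections -> provenance graph.

    extract_figures, detect_manipulation and relate are injected callables,
    standing in for the system's internal tools.
    """
    figures = extract_figures(pdf_path)                       # automatic figure extraction
    flagged = [f for f in figures if detect_manipulation(f)]  # per-figure screening
    graph = ProvenanceGraph()
    for src in figures:
        for dst in flagged:
            label = relate(src, dst)                          # e.g. "cropped", "spliced", None
            if src is not dst and label:
                graph.edges.append((src, dst, label))
    # The graph is handed to a human reviewer; it explains potential problems
    # rather than issuing an automatic verdict.
    return graph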
3.
J Imaging ; 7(8), 2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34460771

ABSTRACT

Identifying the source camera of images and videos has gained significant importance in multimedia forensics. It allows tracing data back to their creator, thus helping to solve copyright infringement cases and to expose the perpetrators of heinous crimes. In this paper, we focus on the problem of camera model identification for video sequences, that is, given a video under analysis, detecting the camera model used for its acquisition. To this purpose, we develop two different CNN-based camera model identification methods that work in a novel multi-modal scenario. Unlike mono-modal methods, which use only the visual or audio information from the investigated video to tackle the identification task, the proposed multi-modal methods jointly exploit audio and visual information. We test the proposed methodologies on the well-known Vision dataset, which collects almost 2000 video sequences belonging to different devices. Experiments are performed considering both native videos directly acquired by their acquisition devices and videos uploaded to social media platforms, such as YouTube and WhatsApp. The achieved results show that the proposed multi-modal approaches significantly outperform their mono-modal counterparts, representing a valuable strategy for the tackled problem and opening future research toward even more challenging scenarios.
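
The multi-modal idea can be sketched as two small CNN branches whose embeddings are concatenated before classification; the layer sizes, input representations, and number of candidate camera models below are assumptions for illustration, not the networks from the paper.

# Illustrative sketch of a multi-modal camera model classifier with one visual
# and one audio branch; all sizes are assumed, not taken from the paper.
import torch
import torch.nn as nn

def conv_branch(in_channels, feat_dim=64):
    # Small CNN that maps an input patch or spectrogram to a fixed-size embedding.
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiModalCameraId(nn.Module):
    def __init__(self, num_models=35, feat_dim=64):
        super().__init__()
        self.visual = conv_branch(3, feat_dim)   # RGB frame patches from the video track
        self.audio = conv_branch(1, feat_dim)    # e.g. a log-Mel spectrogram of the audio track
        self.classifier = nn.Linear(2 * feat_dim, num_models)

    def forward(self, frame_patch, spectrogram):
        # Joint exploitation of both modalities via feature concatenation.
        fused = torch.cat([self.visual(frame_patch), self.audio(spectrogram)], dim=1)
        return self.classifier(fused)            # logits over candidate camera models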

4.
IEEE Trans Image Process ; 25(5): 2298-310, 2016 May.
Article in English | MEDLINE | ID: mdl-26992023

ABSTRACT

Video content is routinely acquired and distributed in a digital compressed format. In many cases, the same video content is encoded multiple times. This is the typical scenario that arises when a video, originally encoded directly by the acquisition device, is then re-encoded, either after an editing operation or when uploaded to a sharing website. The analysis of the bitstream reveals details of the last compression step (i.e., the codec adopted and the corresponding encoding parameters), while masking the previous compression history. Therefore, in this paper, we consider a processing chain of two coding steps, and we propose a method that exploits coding-based footprints to identify both the codec and the group of pictures (GOP) size used in the first coding step. This sort of analysis is useful in video forensics, when the analyst is interested in determining the characteristics of the originating source device, and in video quality assessment, since quality is determined by the whole compression history. The proposed method relies on the fact that lossy coding is an (almost) idempotent operation: re-encoding a video sequence with the same codec and coding parameters produces a sequence that is very similar to the original. As a consequence, if the second codec in the chain does not significantly alter the sequence, this similarity can be analyzed to identify the first codec and the adopted GOP size. The method was extensively validated on a very large dataset of video sequences generated by encoding content with a diversity of codecs (MPEG-2, MPEG-4, H.264/AVC, and DIRAC) and different encoding parameters. In addition, we report a proof of concept showing that the proposed method can also be applied to videos downloaded from YouTube.
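
The idempotency argument lends itself to a simple search over codec and GOP hypotheses: re-encode the video under each hypothesis and keep the one that changes it the least. The sketch below illustrates only this idea; the re-encoding step is an injected placeholder, and the paper's actual footprint analysis is more involved.

# Conceptual sketch of the idempotency test; not the paper's footprint analysis.
import numpy as np

def identify_first_codec(frames, candidates, reencode):
    """Pick the (codec, gop_size) hypothesis whose re-encoding changes the video least.

    frames:     decoded frames of the video under analysis, as a float numpy array
    candidates: iterable of (codec_name, gop_size) hypotheses to test
    reencode:   callable(frames, codec_name, gop_size) -> decoded re-encoded frames
    """
    best, best_distortion = None, np.inf
    for codec_name, gop_size in candidates:
        recoded = reencode(frames, codec_name, gop_size)
        distortion = np.mean((frames - recoded) ** 2)  # MSE as a similarity proxy
        # Idempotency: a matching codec and GOP size should yield near-identical frames.
        if distortion < best_distortion:
            best, best_distortion = (codec_name, gop_size), distortion
    return best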
