Results 1 - 11 of 11
1.
Sensors (Basel) ; 23(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37112398

ABSTRACT

Perceptual encryption (PE) hides the identifiable information of an image in such a way that its intrinsic characteristics remain intact. This recognizable perceptual quality can be used to enable computation in the encryption domain. A class of PE algorithms based on block-level processing has recently gained popularity for its ability to generate JPEG-compressible cipher images. These methods, however, involve a tradeoff between security efficiency and compression savings that depends on the chosen block size. Several techniques (such as processing each color component independently, changing the image representation, and sub-block-level processing) have been proposed to manage this tradeoff effectively. The current study adapts these assorted practices into a uniform framework to provide a fair comparison of their results. Specifically, their compression quality is investigated under various design parameters, such as the choice of colorspace, image representation, chroma subsampling, quantization tables, and block size. Our analyses show that, at best, the PE methods introduce a decrease of 6% and 3% in JPEG compression performance with and without chroma subsampling, respectively. Additionally, their encryption quality is quantified in terms of several statistical analyses. The simulation results show that block-based PE methods exhibit several properties favorable for encryption-then-compression schemes. Nonetheless, to avoid pitfalls, their core design should be considered carefully in the context of the target applications, for which we outline possible future research directions.
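The block-level operations common to this class of PE methods can be illustrated with a minimal sketch: a keyed scrambler that shuffles fixed-size blocks and applies random rotation, flip, and negative-positive transforms. This is a generic illustration, not any specific scheme compared in the study; the block size and the exact set of transforms are assumptions.

```python
import numpy as np

def perceptual_encrypt(img, key, block=16):
    """Keyed block-level scrambling: shuffle blocks, then randomly rotate,
    flip, and negative-positive transform each one (a generic sketch of
    block-based PE, not any one published scheme)."""
    h, w = img.shape[:2]
    h, w = h - h % block, w - w % block          # crop to the block grid
    img = img[:h, :w].copy()
    rng = np.random.default_rng(key)
    # collect blocks in raster order, then shuffle them with the keyed RNG
    blocks = [img[r:r + block, c:c + block]
              for r in range(0, h, block) for c in range(0, w, block)]
    rng.shuffle(blocks)
    out = np.empty_like(img)
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            b = blocks[i]; i += 1
            b = np.rot90(b, k=rng.integers(4))   # random 90-degree rotation
            if rng.integers(2):
                b = b[:, ::-1]                    # random horizontal flip
            if rng.integers(2):
                b = 255 - b                       # negative-positive transform
            out[r:r + block, c:c + block] = b
    return out
```

Because every operation is driven by a keyed generator, the same key reproduces the same cipher image, which is what makes decryption by the key holder possible in schemes of this kind.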

2.
Sensors (Basel) ; 23(7)2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37050460

ABSTRACT

This paper evaluates the effects of JPEG compression on image classification using the Vision Transformer (ViT). In recent years, many studies have been carried out to classify images in the encrypted domain for privacy preservation. Previously, the authors proposed an image classification method that encrypts both a trained ViT model and the test images. There, an encryption-then-compression system was employed to encrypt the test images, and the ViT model was trained beforehand on plain images. The classification accuracy of the previous method was exactly equal to that achieved without any encryption of the trained ViT model and test images. However, although the encrypted test images are compressible, the practical effects of JPEG, a typical lossy compression method, had not been investigated. In this paper, we extend our previous method by compressing the encrypted test images with JPEG and verify the classification accuracy for the compressed encrypted images. Through our experiments, we confirm that the amount of data in the encrypted images can be significantly reduced by JPEG compression, while the classification accuracy of the compressed encrypted images is largely preserved. For example, when the quality factor is set to 85, we show that classification accuracy can be maintained at over 98% with a more than 90% reduction in the amount of image data. Additionally, the effectiveness of JPEG compression is demonstrated through comparison with linear quantization. To the best of our knowledge, this is the first study to classify JPEG-compressed encrypted images without sacrificing high accuracy. We conclude that compressed encrypted images can be classified without degrading accuracy.

3.
Sensors (Basel) ; 23(5)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36904589

ABSTRACT

The Vision Transformer (ViT) architecture has been remarkably successful in image restoration. For a long time, Convolutional Neural Networks (CNNs) predominated in most computer vision tasks. Now, both CNNs and ViTs are efficient approaches that demonstrate powerful capabilities for restoring a higher-quality version of a degraded input image. In this study, the efficiency of ViT in image restoration is studied extensively. The ViT architectures are classified for every image restoration task. Seven tasks are considered: Image Super-Resolution, Image Denoising, General Image Enhancement, JPEG Compression Artifact Reduction, Image Deblurring, Removing Adverse Weather Conditions, and Image Dehazing. The outcomes, advantages, limitations, and possible areas for future research are detailed. Overall, incorporating ViT into new architectures for image restoration is becoming the rule. This is due to advantages over CNNs such as better efficiency, especially when more data are fed to the network, robustness in feature extraction, and a feature learning approach that better captures the variances and characteristics of the input. Nevertheless, drawbacks remain, such as the need for more data to show the benefits of ViT over CNNs, the increased computational cost due to the complexity of the self-attention block, a more challenging training process, and a lack of interpretability. These drawbacks represent future research directions that should be targeted to increase the efficiency of ViT in the image restoration domain.

4.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298425

ABSTRACT

Perceptual encryption (PE) of images protects visual information while retaining the intrinsic properties necessary to enable computation in the encryption domain. Block-based PE produces JPEG-compliant images with almost the same compression savings as the plain images. These methods represent an input color image as a pseudo-grayscale image to benefit from a smaller block size. However, such a representation degrades image quality and compression savings, and removes color information, which limits their applications. To address these limitations, we propose inter- and intra-block processing for compressible PE methods (IIB-CPE). The method represents the input as a color image and performs block-level inter processing and sub-block-level intra processing on it. The intra-block processing results in an inside-out geometric transformation that disrupts the symmetry of an entire block, thus achieving visual encryption of local details while preserving the global contents of the image. The intra-block-level processing also allows the use of a smaller block size, which improves encryption efficiency without compromising compression performance. Our analyses showed that IIB-CPE offers 15% bitrate savings with better image quality than existing PE methods. In addition, we extended the scope of the proposed IIB-CPE to the privacy-preserving deep learning (PPDL) domain.


Subjects
Data Compression, Deep Learning, Privacy, Computer Security, Algorithms, Data Compression/methods
5.
J Imaging ; 10(6)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38921615

ABSTRACT

We propose a neural-network-based watermarking method that introduces a quantized activation function approximating the quantization step of JPEG compression. Many neural-network-based watermarking methods have been proposed. Conventional methods acquire robustness against various attacks by introducing an attack simulation layer between the embedding network and the extraction network, in which the quantization process of JPEG compression is replaced by a noise addition process. In this paper, we propose a quantized activation function that simulates standard JPEG quantization directly in order to improve robustness against JPEG compression. Our quantized activation function consists of several hyperbolic tangent functions and is applied as an activation function for neural networks. The function was introduced into the attack layer of ReDMark, proposed by Ahmadi et al., so that it could be compared with their method; that is, the embedding and extraction networks had the same structure. We compared ordinary JPEG-compressed images with images produced by the quantized activation function. The results showed that a network with quantized activation functions can approximate JPEG compression with high accuracy. We also compared the bit error rate (BER) of estimated watermarks generated by our network with those generated by ReDMark, and found that our network produced estimated watermarks with lower BERs. Therefore, our network outperformed the conventional method with respect to image quality and BER.
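The staircase-of-tanh idea can be sketched as follows: a sum of shifted hyperbolic tangents forms a smooth, differentiable staircase that approximates round(x/q)*q. The step sharpness `alpha` and step count `k_max` are illustrative values, not the paper's parameters.

```python
import numpy as np

def quantized_activation(x, q=1.0, alpha=20.0, k_max=8):
    """Differentiable staircase built from tanh steps that approximates
    JPEG-style quantization round(x/q)*q.  Each tanh contributes one unit
    step at a half-integer location; alpha controls step sharpness and
    k_max bounds the number of steps (both are illustrative choices)."""
    t = x / q
    steps = np.arange(-k_max, k_max) + 0.5        # step locations at +/- 0.5, 1.5, ...
    # each term rises smoothly from 0 to 1 as t crosses a step location;
    # subtracting k_max re-centers the staircase around zero
    y = 0.5 * (1.0 + np.tanh(alpha * (t[..., None] - steps))).sum(-1) - k_max
    return y * q
```

Unlike a hard `round`, this function has nonzero gradients everywhere, which is what lets a quantization-like attack layer sit inside a network trained by backpropagation.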

6.
Multimed Tools Appl ; 82(9): 14153-14169, 2023.
Article in English | MEDLINE | ID: mdl-36196270

ABSTRACT

The unprecedented growth in the easy availability of photo-editing tools has endangered the evidentiary power of digital images. An image was supposed to be worth more than a thousand words, but now this holds only if the image can be authenticated or its integrity proved intact. In this paper, we propose a digital image forensic technique for JPEG images. It can detect forgery in an image when the forged portion, called a ghost image, has a compression quality different from that of the cover image. The technique is based on resaving the JPEG image at a range of qualities; detection of the forged portion peaks when the image is resaved at the same JPEG quality as the cover image. We can also precisely predict the JPEG quality of the cover image by analyzing similarity using the Structural Similarity Index Measure (SSIM) or the energy of the images: the first maximum in SSIM, or the first minimum in energy, corresponds to the cover image's JPEG quality. We created a dataset with varying JPEG compression qualities for the ghost and cover images and validated the scalability of the experimental results. We also experimented with varied attack scenarios, e.g., a high-quality ghost image embedded in a low-quality cover image, a low-quality ghost image embedded in a high-quality cover image, and ghost and cover images at the same quality. The proposed method is able to localize tampered portions accurately, even for forgeries as small as 10 × 10 pixel blocks. Our technique is also robust against other attack scenarios such as copy-move forgery, inserting text into an image, and rescaling (zooming in/out) the ghost image before pasting it onto the cover image.
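The resave-and-compare idea can be sketched as follows: resaving a JPEG at its original quality is nearly idempotent, so the squared-difference energy dips there. This is a minimal grayscale version using the energy measure only (the paper also uses SSIM); the function names and the candidate quality range are hypothetical.

```python
import io
import numpy as np
from PIL import Image

def resave_energy(jpeg_img, quality):
    """Mean squared difference between a decoded JPEG and the same image
    resaved at `quality`.  The energy dips when the resave quality matches
    the quality the image was originally compressed at."""
    ref = np.asarray(jpeg_img.convert("L"), dtype=np.float64)
    buf = io.BytesIO()
    jpeg_img.convert("L").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)
    return float(((ref - resaved) ** 2).mean())

def estimate_cover_quality(jpeg_img, qualities=range(50, 96)):
    """Scan candidate qualities low-to-high and return the first local
    minimum of the energy curve, per the first-minimum heuristic; fall
    back to the global minimum if no interior dip is found."""
    e = [resave_energy(jpeg_img, q) for q in qualities]
    for i in range(1, len(e) - 1):
        if e[i] < e[i - 1] and e[i] <= e[i + 1]:
            return list(qualities)[i]
    return list(qualities)[int(np.argmin(e))]
```

Applied block-wise instead of globally, the same energy map is what localizes a ghost region whose quality differs from the cover's.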

7.
J Imaging ; 9(7)2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37504820

ABSTRACT

Thermography is probably the most widely used method of measuring surface temperature; it works by analyzing radiation in the infrared part of the spectrum, and its accuracy depends on factors such as emissivity and reflected radiation. Contrary to the popular belief that thermographic images represent temperature maps, they are actually thermal radiation converted into an image and, if not properly calibrated, show incorrect temperatures. The objective of this study is to analyze commonly used image processing techniques and their impact on radiometric data in thermography; in particular, it considers the extent to which a thermogram can be treated as an image and how image processing affects radiometric data. Three analyses are presented. The first examines how image processing techniques such as contrast and brightness adjustment affect physical reality and its representation in thermographic imaging. The second examines the effects of JPEG compression on radiometric data and how the degradation varies with the compression parameters. The third aims to determine the resolution increase required to minimize the effects of compression on the radiometric data. The output from an IR camera in CSV format was used for these analyses and compared to images from the manufacturer's software. An IR camera providing data in JPEG format was used, and the data included thermographic images, visible images, and a matrix of thermal radiation data. The study was verified against a reference blackbody source set at 60 °C. The results highlight the dangers of interpreting thermographic images as temperature maps without considering the underlying radiometric data, which can be affected by image processing and compression. The paper concludes by emphasizing the importance of accurate and precise thermographic analysis for reliable temperature measurement.

8.
Comput Methods Programs Biomed ; 202: 105969, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33631639

ABSTRACT

BACKGROUND AND OBJECTIVES: This paper reports a quantitative analysis of the effects of Joint Photographic Experts Group (JPEG) image compression of retinal fundus camera images on automatic vessel segmentation and on the morphometric vascular measurements derived from it, including vessel width, tortuosity and fractal dimension. METHODS: Measurements are computed with the vascular assessment and measurement platform for images of the retina (VAMPIRE), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images (digital retinal images for vessel extraction (DRIVE), automated retinal image analyzer (ARIA), high-resolution fundus (HRF)). We generate compressed versions of the original images at a range of representative levels. RESULTS: We compare the resulting vessel segmentations with ground truth maps, and the morphological measurements of the vascular network with those obtained from the original (uncompressed) images. We assess segmentation quality with sensitivity, specificity, accuracy, area under the curve and the Dice coefficient. We assess the agreement between VAMPIRE measurements from compressed and uncompressed images with correlation, intra-class correlation and Bland-Altman analysis. CONCLUSIONS: Results suggest that VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD) and arteriolar tortuosity show excellent agreement with those from the original images, remaining substantially stable even under strong loss of quality (20% of the original), suggesting the suitability of VAMPIRE in association studies with compressed images.
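The segmentation-quality metrics listed above can be computed from a pair of binary vessel masks as follows (a generic sketch, not VAMPIRE code; it assumes non-degenerate masks with both classes present):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Sensitivity, specificity, accuracy and Dice coefficient for a pair
    of binary masks, e.g. vessel segmentations from compressed vs.
    uncompressed fundus images.  Assumes both masks contain at least one
    positive and one negative pixel (no zero-division guard)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels found in both masks
    tn = np.sum(~pred & ~truth)    # background agreed by both masks
    fp = np.sum(pred & ~truth)     # spurious vessel pixels
    fn = np.sum(~pred & truth)     # missed vessel pixels
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / pred.size,
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```

Computing these for each compression level against the ground-truth map is exactly the kind of comparison the study performs.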


Subjects
Data Compression, Retinal Vein, Fundus Oculi, Photography, Reproducibility of Results, Retinal Vessels/diagnostic imaging
9.
Math Biosci Eng ; 16(5): 5041-5061, 2019 06 01.
Article in English | MEDLINE | ID: mdl-31499703

ABSTRACT

Source camera identification has been well studied in laboratory environments, where the training and test samples are all original images without recompression. However, image compression is quite common in the real world; when the training and test images are double JPEG compressed with different quantization tables, the identification accuracy of existing methods decreases dramatically. To address this challenge, we propose a novel iterative algorithm, joint first and second order statistics matching (JSM), which learns a feature projection that maps the training and test features into a low-dimensional subspace to reduce the shift caused by image recompression. Inspired by transfer learning, JSM aims to learn a new feature representation from the original feature space by simultaneously matching the first and second order statistics between training and test features in a principled dimensionality reduction procedure. After the feature projection, the divergence between training and test features caused by recompression is reduced while the discriminative properties are preserved. Extensive experiments on the public Dresden Image Database verify that JSM significantly outperforms several state-of-the-art methods on camera model identification of recompressed images.
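A one-shot, non-iterative illustration of matching first and second order statistics (whiten the source features, then color them with the target covariance and shift to the target mean) might look like the sketch below; JSM itself additionally learns a low-dimensional projection iteratively, which this sketch does not attempt.

```python
import numpy as np

def match_statistics(source, target, eps=1e-6):
    """Align the mean (first order) and covariance (second order) of
    `source` feature rows to those of `target`: whiten the source with
    Cs^{-1/2}, color with Ct^{1/2}, then shift to the target mean.
    A one-shot illustration of statistics matching, not JSM itself."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def sqrt_m(m, inv=False):
        # matrix square root (or inverse square root) of a symmetric
        # positive-definite matrix via eigendecomposition
        w, v = np.linalg.eigh(m)
        w = np.clip(w, eps, None)
        d = 1 / np.sqrt(w) if inv else np.sqrt(w)
        return (v * d) @ v.T

    return (source - mu_s) @ sqrt_m(cs, inv=True) @ sqrt_m(ct) + mu_t
```

After this transform the source features have (up to the small regularizer) exactly the target's sample mean and covariance, which is the sense in which the domain shift between original and recompressed features is reduced.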

10.
Forensic Sci Int ; 277: 133-147, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28648761

ABSTRACT

There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression; therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of denoising algorithms are proposed: one based on a constrained minimization problem over the total variation of energy, and the other on a normalized weighted function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, albeit at a high computational cost.

11.
J Forensic Sci ; 60(1): 197-205, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25442510

ABSTRACT

To counter image forgeries, a number of forensic techniques for digital images have been developed that can detect an image's origin, trace its processing history, and locate the position of tampering. In particular, the statistical footprint left by JPEG compression can be a valuable source of information for the forensic analyst, and several image forensic algorithms have been proposed based on image statistics in the DCT domain. Recently, it has been shown that these footprints can be removed by adding a suitable anti-forensic dithering signal to the image in the DCT domain, which invalidates some image forensic algorithms. In this paper, a novel anti-forensic algorithm is proposed that is capable of concealing the quantization artifacts left in a singly JPEG-compressed image. In the scheme, a chaos-based dither is added to the image's DCT coefficients to remove such artifacts. The effectiveness of the scheme and the resulting loss of image quality are both evaluated experimentally. The simulation results show that the proposed anti-forensic scheme can be used to test the reliability of JPEG forensic tools.
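The chaotic-dither idea can be sketched as follows: a logistic-map sequence, scaled so it stays within half a quantization step, is added to quantized DCT coefficients so they no longer sit on exact multiples of the step. The map parameters and function names here are illustrative, not the paper's.

```python
import numpy as np

def logistic_dither(n, x0=0.3137, r=3.99):
    """Deterministic chaotic sequence from the logistic map
    x -> r*x*(1-x), shifted to roughly zero-mean values in (-0.5, 0.5).
    x0 acts as the key; r near 4 keeps the map in its chaotic regime
    (both values are illustrative)."""
    out = np.empty(n)
    v = x0
    for i in range(n):
        v = r * v * (1.0 - v)     # stays in (0, 1) for x0 in (0, 1)
        out[i] = v
    return out - 0.5

def conceal_quantization(coeffs, step):
    """Spread quantized DCT coefficients (multiples of `step`) back across
    their quantization bins by adding a keyed chaotic dither bounded by
    step/2 -- the gap-filling idea behind this class of anti-forensics."""
    d = logistic_dither(coeffs.size).reshape(coeffs.shape) * step
    return coeffs + d
```

Because the dither never exceeds half a step, each coefficient stays inside its original quantization bin, so decompression still reconstructs essentially the same image while the comb-like DCT histogram is smoothed out.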
