Results 1 - 9 of 9
1.
IEEE Trans Image Process ; 18(8): 1772-81, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19423442

ABSTRACT

Bitplane coding is a common strategy used in current image coding systems to perform lossy, or lossy-to-lossless, compression. Several studies and applications employing bitplane coding require estimators to approximate the distortion produced when data are successively coded and transmitted. Such estimators usually assume that coefficients are uniformly distributed within the quantization interval. Even though this assumption simplifies estimation, it does not exactly correspond to the nature of the signal. This work introduces new estimators to approximate the distortion produced by the successive coding of transform coefficients in bitplane image coders, determined through a precise approximation of the coefficients' distribution within the quantization intervals. Experimental results obtained in three applications suggest that the proposed estimators approximate distortion with very high accuracy, providing a significant improvement over state-of-the-art results.
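As context for the uniform-distribution assumption the abstract questions, here is a minimal sketch (not the paper's estimators) of the conventional model: reconstructing at the midpoint of a quantization interval of width Δ gives an expected squared error of Δ²/12 under the uniform assumption, which drifts from the true distortion when coefficients follow, say, a Laplacian-like density instead:

```python
import random

def uniform_midpoint_estimate(delta):
    # Under the uniform assumption, reconstructing at the interval
    # midpoint yields an expected squared error of delta^2 / 12.
    return delta ** 2 / 12.0

def empirical_distortion(samples, delta):
    # Quantize each coefficient to its interval and reconstruct at
    # the midpoint; average the squared reconstruction error.
    err = 0.0
    for x in samples:
        k = int(x // delta)               # quantization index
        rec = (k + 0.5) * delta           # midpoint reconstruction
        err += (x - rec) ** 2
    return err / len(samples)

random.seed(0)
delta = 4.0
# Laplacian-like magnitudes: denser near zero, unlike the uniform model.
coeffs = [random.expovariate(1.0 / 2.0) for _ in range(100_000)]
print(uniform_midpoint_estimate(delta))     # model-based estimate
print(empirical_distortion(coeffs, delta))  # true distortion (differs here)
```

The gap between the two printed values is exactly the estimation error that distribution-aware estimators aim to remove.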

2.
IEEE Trans Med Imaging ; 38(1): 21-32, 2019 01.
Article in English | MEDLINE | ID: mdl-29994394

ABSTRACT

The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
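The Karhunen-Loève Transform used as a baseline above can be computed as the eigendecomposition of the inter-component covariance matrix. A sketch on synthetic three-component data (not WSI data, and not the proposed mosaic optimization):

```python
import numpy as np

# Hypothetical 3-component image flattened to (num_pixels, 3); the three
# components share a common signal, mimicking inter-component correlation.
rng = np.random.default_rng(0)
base = rng.normal(size=(10_000, 1))
rgb = np.hstack([base + 0.1 * rng.normal(size=(10_000, 1)) for _ in range(3)])

# KLT: eigenvectors of the inter-component covariance matrix.
cov = np.cov(rgb, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
klt = eigvecs[:, ::-1].T                     # rows = basis, by variance

decorrelated = (rgb - rgb.mean(axis=0)) @ klt.T
print(np.round(np.cov(decorrelated, rowvar=False), 6))  # ~diagonal
```

The resulting covariance of the transformed components is diagonal, i.e., the inter-component redundancy is removed; the paper's point is that this decorrelation alone does not account for the downstream compression algorithm.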


Subjects
Data Compression/methods; Image Interpretation, Computer-Assisted/methods; Algorithms; Color; Databases, Factual; Histological Techniques; Humans; Neoplasms/diagnostic imaging
3.
IEEE Trans Image Process ; 25(9): 4004-4017, 2016 Sep.
Article in English | MEDLINE | ID: mdl-28113430

ABSTRACT

The lossless intra-prediction coding modality of the High Efficiency Video Coding standard provides high coding performance while allowing frame-by-frame access to the coded data. This is of interest in many professional applications, such as medical imaging, automotive vision, and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using differential pulse code modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. In particular, we analyze the range of values in residual blocks and accordingly apply a pwm function to map specific residual values to unique lower values. The appropriate parameters associated with the pwm functions are encoded at the encoder, so that the corresponding inverse pwm functions at the decoder can map values back to the same residual values. These residual values are then used to reconstruct the original signal. The mapping is, therefore, reversible and introduces no losses. We evaluate the pwm functions on 4 × 4 residual blocks computed after DPCM-based prediction for lossless coding of a variety of camera-captured and screen-content sequences. Evaluation results show that the pwm functions attain maximum bit-rate reductions of 5.54% and 28.33% for screen-content material compared with DPCM-based and block-wise intra-prediction, respectively. Compared with intra-block copy, piecewise mapping attains a maximum bit-rate reduction of 11.48% for camera-captured material.
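The abstract does not specify the pwm functions themselves. As a hedged stand-in, the sketch below uses a rank-based reversible mapping (not the paper's method) to illustrate the general pattern: residual values in a block are mapped to unique smaller codes, and the parameters needed for the inverse map are the side information an encoder would transmit:

```python
def forward_map(block):
    # Map each distinct residual value to its magnitude rank, a unique
    # small code. The sorted value list is the side information an
    # encoder would transmit so the decoder can invert the map.
    values = sorted(set(block), key=abs)
    table = {v: i for i, v in enumerate(values)}
    return [table[v] for v in block], values

def inverse_map(mapped, values):
    # Decoder side: parameters restore the exact residual values.
    return [values[i] for i in mapped]

block = [0, -7, 3, 3, 120, -7, 0, 0]
mapped, params = forward_map(block)
assert inverse_map(mapped, params) == block   # lossless round trip
print(mapped)                                 # [0, 2, 1, 1, 3, 2, 0, 0]
```

The outlier residual 120 becomes the small code 3, reducing the energy of the mapped block, while invertibility guarantees lossless reconstruction.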

4.
IEEE Trans Image Process ; 25(1): 209-19, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26441420

ABSTRACT

Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded on the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous fashion. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible.
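The data-parallel principle behind BPC-PaCo can be illustrated with plain bitplane extraction, where one vectorized operation touches every coefficient of a codeblock in lockstep (the reformulated context formation and arithmetic coder are not sketched here):

```python
import numpy as np

rng = np.random.default_rng(1)
codeblock = rng.integers(0, 256, size=(8, 8), dtype=np.uint16)

planes = []
for p in range(7, -1, -1):
    # One vectorized (lockstep, SIMD-style) operation extracts bit p of
    # every coefficient in the codeblock at once -- no per-coefficient loop.
    planes.append((codeblock >> p) & 1)

# Reassemble to verify the bitplane decomposition is lossless.
rebuilt = np.zeros_like(codeblock)
for p, plane in zip(range(7, -1, -1), planes):
    rebuilt |= plane << p
assert np.array_equal(rebuilt, codeblock)
```

Each bitplane is produced by a single instruction applied to all 64 coefficients at once, which is the access pattern SIMD hardware rewards.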

5.
IEEE Trans Image Process ; 24(1): 57-67, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25415984

ABSTRACT

Entropy is a measure of a message's uncertainty. Among other aspects, it determines the minimum coding rate that practical systems may attain. This paper defines an entropy-based measure to evaluate context models employed in wavelet-based image coding. The proposed measure is defined considering the mechanisms utilized by modern coding systems, and it establishes the maximum performance achievable with each context model. This helps to determine the adequacy of the model under different coding conditions and serves to predict with high precision the coding rate achieved by practical systems. Experimental results evaluate four well-known context models using different types of images, coding rates, and transform strategies. They reveal that, under specific coding conditions, some widespread context models may not be as adequate as generally thought. The hints provided by this analysis may help to design simpler and more efficient wavelet-based image codecs.
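An entropy-based bound of this kind can be sketched as the conditional entropy H(S|C) of symbols given their contexts: the minimum average code length any coder driven by that context model could reach. A toy example with a one-neighbor binary context (an assumption for illustration, not one of the paper's models):

```python
from collections import Counter
from math import log2

def conditional_entropy(pairs):
    # pairs: (context, symbol) events. H(S|C) = sum_c p(c) * H(S|C=c),
    # the minimum average bits/symbol achievable with these contexts.
    ctx_counts = Counter(c for c, _ in pairs)
    joint = Counter(pairs)
    n = len(pairs)
    h = 0.0
    for (c, s), k in joint.items():
        p_joint = k / n             # p(c, s)
        p_cond = k / ctx_counts[c]  # p(s | c)
        h -= p_joint * log2(p_cond)
    return h

# Toy binary source: each symbol tends to repeat its left neighbor,
# which serves as the context.
bits = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1]
pairs = list(zip(bits[:-1], bits[1:]))
print(round(conditional_entropy(pairs), 3))
```

Because the context is informative, H(S|C) falls below the unconditional entropy of the symbols, quantifying how much the context model can help at best.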

6.
IEEE Trans Image Process ; 22(12): 4678-88, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23955751

ABSTRACT

Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
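For reference, the USDQ baseline that 2SDQ builds on can be sketched as follows: the deadzone (zero) bin is twice as wide as the regular bins, and reconstruction is at the interval midpoint (2SDQ's two step sizes are not modeled here):

```python
def usdq_index(x, step):
    # Uniform scalar deadzone quantization: the zero bin is twice as
    # wide as the others ([-step, step) maps to index 0).
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) / step)

def usdq_reconstruct(q, step):
    # Reconstruct at the midpoint of the quantization interval.
    if q == 0:
        return 0.0
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + 0.5) * step

for x in (-9.3, -0.4, 0.0, 2.7, 11.0):
    q = usdq_index(x, 4.0)
    print(x, q, usdq_reconstruct(q, 4.0))
```

Small coefficients collapse into the wide zero bin, which is what makes deadzone quantization effective on sparse wavelet subbands.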

7.
IEEE Trans Image Process ; 21(4): 1920-33, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22128007

ABSTRACT

Scanning orders of bitplane image coding engines are commonly devised from theoretical or experimental insights and assessed in practice in terms of coding performance. This paper evaluates classic scanning strategies of modern bitplane image codecs using several theoretical-practical mechanisms conceived from rate-distortion theory. These mechanisms allow distinguishing the features of the bitplane coder that are essential from those that are not. This discernment can aid the design of new bitplane coding engines with special purposes and/or requirements. To emphasize this point, a low-complexity scanning strategy is proposed. Experimental evidence illustrates, assesses, and validates the proposed mechanisms and scanning orders.
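One classic scanning order of the kind evaluated here is the stripe-oriented scan of JPEG2000's bitplane coder, which visits a codeblock in four-row stripes, column by column within each stripe; a sketch:

```python
def stripe_scan(height, width, stripe=4):
    # JPEG2000-style scan: the codeblock is split into horizontal
    # stripes; within a stripe, coefficients are visited column by
    # column, top to bottom.
    order = []
    for top in range(0, height, stripe):
        for col in range(width):
            for row in range(top, min(top + stripe, height)):
                order.append((row, col))
    return order

# First coefficients visited in an 8x8 codeblock: the first column of
# the first stripe, then the next column, and so on.
print(stripe_scan(8, 8)[:8])
```

Different scanning orders change which coefficients are coded first and therefore how quickly distortion drops along the codestream, which is what the rate-distortion mechanisms above measure.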


Subjects
Algorithms; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
8.
IEEE Trans Image Process ; 20(8): 2153-65, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21324777

ABSTRACT

This paper introduces a probability model for the symbols emitted by bitplane image coding engines, conceived from a precise characterization of the signal produced by a wavelet transform. The main insights behind the proposed model are the estimation of the magnitude of a wavelet coefficient as the arithmetic mean of its neighbors' magnitudes (the so-called local average), and the assumption that emitted bits are under-complete representations of the underlying signal. The local average-based probability model is introduced in the framework of JPEG2000. While the resulting system is not JPEG2000 compatible, it preserves all features of the standard. The practical benefits of our model are enhanced coding efficiency, more opportunities for parallelism, and improved spatial scalability.
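The local average itself is simple to state; a sketch assuming an 8-connected neighborhood (the abstract does not specify the neighborhood, so that choice is an assumption):

```python
def local_average(mag, r, c):
    # Magnitude estimate for coefficient (r, c): the arithmetic mean of
    # its neighbors' magnitudes (here, the 8-connected neighbors),
    # clipped at the codeblock borders.
    h, w = len(mag), len(mag[0])
    neigh = [mag[i][j]
             for i in range(max(0, r - 1), min(h, r + 2))
             for j in range(max(0, c - 1), min(w, c + 2))
             if (i, j) != (r, c)]
    return sum(neigh) / len(neigh)

mag = [[2, 8, 4],
       [0, 6, 2],
       [4, 2, 0]]
print(local_average(mag, 1, 1))   # mean of the 8 neighbors: 2.75
```

A probability model can then condition the likelihood of a significance bit on this estimate rather than on discrete context states.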

9.
IEEE Trans Image Process ; 20(4): 1166-73, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20875972

ABSTRACT

This work addresses the transmission of pre-encoded JPEG2000 video in a video-on-demand scenario. The primary requirement for the rate allocation algorithm deployed in the server is to match the real-time processing demands of the application, so the algorithm must scale in complexity to supply a valid solution by any given instant of time. The FAst rate allocation through STeepest descent (FAST) method introduced in this work selects an initial (and possibly poor) solution and iteratively improves it until time is exhausted or the algorithm finishes execution. Experimental results suggest that FAST commonly achieves solutions close to the global optimum while employing very few computational resources.
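FAST's actual descent step is not given in the abstract; the sketch below only illustrates the anytime, iterative-improvement pattern, using a made-up diminishing-returns gain model and a wall-clock deadline:

```python
import time

def fast_like_allocation(gains, budget, deadline_s=0.1):
    # Anytime allocation in the spirit of steepest descent: start from a
    # trivial solution and, while time remains, greedily give one unit of
    # rate to the frame with the largest marginal gain. Interrupting the
    # loop at any point still leaves a valid (if suboptimal) allocation.
    alloc = [0] * len(gains)
    t0 = time.monotonic()
    spent = 0
    while spent < budget and time.monotonic() - t0 < deadline_s:
        # Marginal gain of frame i: gains[i] / (alloc[i] + 1), a
        # hypothetical diminishing-returns model.
        i = max(range(len(gains)), key=lambda i: gains[i] / (alloc[i] + 1))
        alloc[i] += 1
        spent += 1
    return alloc

# Hypothetical per-frame distortion-gain weights.
print(fast_like_allocation([5.0, 1.0, 3.0], budget=9))
```

The deadline parameter is what provides the complexity scalability the abstract describes: a tighter deadline simply returns an earlier, rougher allocation.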


Subjects
Algorithms; Computer Communication Networks; Computer Graphics; Data Compression/methods; Multimedia; Signal Processing, Computer-Assisted; Video Recording/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity