Results 1 - 12 of 12
1.
J Digit Imaging; 36(4): 1826-1850, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37038039

ABSTRACT

The growing use of multimodal high-resolution volumetric data in pre-clinical studies leads to challenges in managing and handling these large datasets. Unlike the clinical context, there are currently no standard guidelines regulating the use of image compression in pre-clinical settings as a potential alleviation of this problem. In this work, the authors study the application of lossy image coding to compress high-resolution volumetric biomedical data. The impact of compression on the metrics and interpretation of volumetric data was quantified for a correlated multimodal imaging study characterizing murine tumor vasculature, using volumetric high-resolution episcopic microscopy (HREM), micro-computed tomography (µCT), and micro-magnetic resonance imaging (µMRI). The effects of compression were assessed by measuring the task-specific performance of several biomedical experts who interpreted and labeled multiple data volumes compressed to different degrees. We defined trade-offs between data volume reduction and preservation of visual information that ensured the preservation of relevant vasculature morphology at maximum compression efficiency across scales. Using the Jaccard Index (JI) and the average Hausdorff Distance (HD) after vasculature segmentation, we demonstrated that, in this study, compression yielding a 256-fold reduction in data size kept the compression-induced error below the inter-observer variability, with minimal impact on the assessment of the tumor vasculature across scales.
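The two agreement metrics named in this abstract can be sketched in a few lines; the toy 2-D voxel sets below are purely illustrative stand-ins for the paper's segmented vasculature masks.

```python
import math

def jaccard_index(a, b):
    """JI = |A intersect B| / |A union B| over sets of labeled voxel coordinates."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def average_hausdorff(a, b):
    """Symmetric average HD: mean nearest-neighbour distance, both directions."""
    def directed(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (directed(a, b) + directed(b, a)) / 2.0

seg_ref = [(0, 0), (0, 1), (1, 0)]  # reference mask (toy 2-D "voxels")
seg_cmp = [(0, 0), (0, 1), (1, 1)]  # mask from a compressed volume
print(jaccard_index(seg_ref, seg_cmp))    # 2 shared of 4 total -> 0.5
print(average_hausdorff(seg_ref, seg_cmp))
```

The average HD is taken here as the mean nearest-neighbour distance in both directions, one common definition; the paper may use a different variant.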


Subjects
Data Compression; Neoplasms; Humans; Animals; Mice; Data Compression/methods; X-Ray Microtomography; Magnetic Resonance Imaging; Multimodal Imaging; Image Processing, Computer-Assisted/methods
2.
Entropy (Basel); 25(1), 2023 Jan 12.
Article in English | MEDLINE | ID: mdl-36673299

ABSTRACT

This paper presents a lossless image compression method with fast decoding and flexible adjustment of the coder parameters that affect its implementation complexity. Several approaches to computing non-MMSE prediction coefficients, at different levels of complexity, were compared. The data-modeling stage of the proposed codec is based on linear prediction (calculated by the non-MMSE method) and non-linear prediction (complemented by a context-dependent constant-component-removal block). Prediction-error coding uses two-stage compression: an adaptive Golomb code followed by a binary arithmetic code. The proposed solution achieves 30% shorter decoding times and a lower average bitrate than competing solutions (7.9% lower relative to the popular JPEG-LS codec).
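As a rough illustration of the first entropy-coding stage mentioned above, the sketch below implements a plain Golomb-Rice code with a fixed parameter k; the paper's coder adapts its parameters and follows with binary arithmetic coding, neither of which is reproduced here.

```python
def zigzag(e):
    """Map a signed prediction error to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(n, k):
    """Golomb-Rice code for m = 2**k: unary quotient, '0' separator, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

errors = [0, -1, 3, -2]                  # toy prediction errors
bits = "".join(golomb_rice(zigzag(e), k=2) for e in errors)
print(bits)
```

Small errors map to short codewords, which is why Golomb-family codes suit the peaked distributions typical of prediction residuals.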

3.
Sensors (Basel); 22(20), 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298425

ABSTRACT

Perceptual encryption (PE) of images protects visual information while retaining the intrinsic properties necessary to enable computation in the encryption domain. Block-based PE produces JPEG-compliant images with almost the same compression savings as the plain images. These methods represent an input color image as a pseudo-grayscale image to benefit from a smaller block size. However, such a representation degrades image quality and compression savings, and removes color information, which limits their applications. To overcome these limitations, we propose inter- and intra-block processing for compressible PE methods (IIB-CPE). The method represents the input as a color image and performs block-level inter processing and sub-block-level intra processing on it. The intra-block processing applies an inside-out geometric transformation that disrupts the symmetry of an entire block, thus achieving visual encryption of local details while preserving the global contents of the image. The intra-block-level processing also allows the use of a smaller block size, which improves encryption efficiency without compromising compression performance. Our analyses showed that IIB-CPE offers 15% bitrate savings with better image quality than existing PE methods. In addition, we extended the scope of the proposed IIB-CPE to the privacy-preserving deep learning (PPDL) domain.
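The exact inside-out transform is defined in the paper; as a generic stand-in, the sketch below shows the two-level idea with a keyed whole-block shuffle (inter) and a per-block reversal (intra), so that decryption only requires the key.

```python
import random

def encrypt_blocks(blocks, key):
    rng = random.Random(key)
    order = list(range(len(blocks)))
    rng.shuffle(order)                        # inter: keyed block permutation
    return [blocks[i][::-1] for i in order]   # intra: reverse each block's samples

def decrypt_blocks(cipher, key):
    rng = random.Random(key)
    order = list(range(len(cipher)))
    rng.shuffle(order)                        # regenerate the same permutation
    plain = [None] * len(cipher)
    for pos, i in enumerate(order):
        plain[i] = cipher[pos][::-1]          # undo the intra reversal
    return plain

blocks = [[1, 2], [3, 4], [5, 6]]             # toy "image" of three blocks
print(encrypt_blocks(blocks, key=42))
```

Because both operations are invertible given the key, compression can act on the scrambled blocks while authorized decoders recover the original.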


Subjects
Data Compression; Deep Learning; Privacy; Computer Security; Algorithms; Data Compression/methods
4.
Sensors (Basel); 22(3), 2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161566

ABSTRACT

Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive and powerful training, impressive efficiency and accuracy improvements have been achieved across the entire transmission pipeline. For example, the high model capacity of learning-based architectures makes it possible to model image and video behavior accurately enough that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user-perception modeling have benefited widely from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even when only a subpart of it is optimized. In this paper, we review the recent major advances proposed across the transmission chain, and we discuss their potential impact and the research challenges they raise.


Subjects
Image Interpretation, Computer-Assisted; Multimedia; Machine Learning; Signal Processing, Computer-Assisted; Video Recording
5.
Sensors (Basel); 20(11), 2020 May 26.
Article in English | MEDLINE | ID: mdl-32466401

ABSTRACT

Biomedical planar imaging using gamma radiation is an important screening tool for medical diagnostics. Since lens imaging is not available for gamma rays, current methods use lead-collimator or pinhole techniques to perform imaging. However, owing to ineffective utilization of the gamma radiation emitted from the patient's body and the radioactive dose limit for patients, poor image signal-to-noise ratio (SNR) and long image-capture times are evident. Furthermore, resolution is related to the pinhole diameter, so there is a tradeoff between SNR and resolution. Our objectives are to reduce the radioactive dose given to the patient and to preserve or improve SNR, resolution, and capture time while incorporating three-dimensional capabilities into existing gamma imaging systems. The proposed imaging system is based on super-resolved time-multiplexing methods using both variable and moving pinhole arrays. Simulations were performed in both MATLAB and GEANT4, and gamma single-photon emission computed tomography (SPECT) experiments were conducted to support the theory and simulations. The proposed method reduces the radioactive dose and image-capture time and improves SNR and resolution, enhancing the capabilities of current gamma imaging systems while providing three-dimensional data on the object.


Subjects
Gamma Rays; Radionuclide Imaging; Tomography, Emission-Computed, Single-Photon; Humans; Phantoms, Imaging; Signal-To-Noise Ratio
6.
Entropy (Basel); 22(9), 2020 Aug 22.
Article in English | MEDLINE | ID: mdl-33286688

ABSTRACT

In this paper, a lossless image coding method that is currently the most efficient from a data-compaction point of view is presented. Although computationally complex, the algorithm is still more time-efficient than its main competitors. The presented cascaded method is based on the Weighted Least Squares (WLS) technique with many improvements; e.g., its main stage is followed by a two-step NLMS predictor ending with context-dependent constant-component removal. The prediction error is coded by a highly efficient binary context arithmetic coder. The performance of the new algorithm is compared with that of other coders on a set of widely used benchmark images.
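A minimal sketch of an NLMS (normalized least-mean-squares) predictor of the kind mentioned above, with an illustrative order and step size; the paper's two-step, cascaded design is not reproduced here.

```python
def nlms_predict(samples, order=2, mu=0.5, eps=1e-6):
    """Adaptive linear prediction: weights are updated per sample by the
    normalized error. Returns the stream of prediction errors."""
    w = [0.0] * order
    errors = []
    for i in range(order, len(samples)):
        ctx = samples[i - order:i]
        pred = sum(wj * xj for wj, xj in zip(w, ctx))
        e = samples[i] - pred
        errors.append(e)
        norm = sum(x * x for x in ctx) + eps   # eps avoids division by zero
        w = [wj + mu * e * xj / norm for wj, xj in zip(w, ctx)]
    return errors

# On a smooth ramp signal, the prediction error shrinks as the weights adapt.
errs = nlms_predict([1, 2, 3, 4, 5, 6, 7, 8])
print(errs[0], errs[-1])
```

In a lossless codec these shrinking errors, not the samples, are what the entropy coder sees, which is where the compression gain comes from.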

7.
Sensors (Basel); 19(13), 2019 Jul 03.
Article in English | MEDLINE | ID: mdl-31277217

ABSTRACT

An analog joint source-channel coding (JSCC) system designed for the transmission of still images is proposed, and its performance is compared with that of two digital alternatives that differ in the source-encoding operation: Joint Photographic Experts Group (JPEG) and JPEG without entropy coding (JPEGw/oEC), both relying on an optimized channel encoder-modulator tandem. Apart from a visual comparison, the figures of merit considered in the assessment are the structural similarity (SSIM) index and the time required to transmit an image through additive white Gaussian noise (AWGN) and Rayleigh channels. This work shows that the proposed analog system performs similarly to the digital scheme based on JPEG compression, with noticeably less visual degradation apparent to the human eye, lower computational complexity, and negligible delay. These results confirm the suitability of analog JSCC for the transmission of still images in scenarios with severe constraints on power consumption and computational capability, and for real-time applications. For these reasons, the proposed system is a good candidate for surveillance systems, resource-constrained devices, Internet of Things (IoT) applications, etc.
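The SSIM figure of merit used in this comparison can be sketched over a single global window (the standard metric averages many local windows); the constants follow the usual C1 = (0.01L)^2, C2 = (0.03L)^2 choice for 8-bit data.

```python
def ssim(x, y, L=255):
    """Single-window SSIM over two equal-length pixel sequences."""
    n = len(x)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx * mx + my * my + c1) * (vx + vy + c2)))

img = [52, 55, 61, 59, 79, 61, 76, 41]       # toy pixel row
print(ssim(img, img))                         # identical images -> 1.0
print(ssim(img, [v + 10 for v in img]))       # brightness shift lowers SSIM
```

Unlike mean squared error, SSIM separates luminance, contrast, and structure terms, which is why it tracks perceived quality more closely.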

8.
Sensors (Basel); 18(4), 2018 Apr 17.
Article in English | MEDLINE | ID: mdl-29673189

ABSTRACT

Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, comprising a compressive encoder and a real-time decoder based on Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix learned under the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
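The adaptive measurement step can be illustrated with a toy allocator: blocks with more gradient energy (less sparse in the gradient field) receive more CS measurements. The budget, block values, and proportional rule are assumptions for illustration, not the paper's exact model.

```python
def allocate_measurements(blocks, budget):
    """Split a total CS measurement budget across blocks by gradient energy."""
    grads = [sum(abs(b[i + 1] - b[i]) for i in range(len(b) - 1))
             for b in blocks]
    total = sum(grads) or 1                   # guard against an all-flat image
    return [max(1, round(budget * g / total)) for g in grads]

blocks = [[10, 10, 10, 10],   # flat block: low gradient, few measurements
          [0, 50, 5, 80],     # textured block: high gradient, many
          [20, 25, 20, 25]]   # mildly varying block
print(allocate_measurements(blocks, budget=16))
```

Spending measurements where the signal is least sparse is the standard CS intuition: flat blocks are already well captured by a few projections.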

9.
Proc IEEE Inst Electr Electron Eng; 101(9): 2058-2067, 2013 Sep.
Article in English | MEDLINE | ID: mdl-24489403

ABSTRACT

Making technological advances in the field of human-machine interaction requires that the capabilities and limitations of the human perceptual system be taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming increasingly important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms; in particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and validating those models against ad hoc ground truth are also discussed. Examples of the use of visual attention models in image and video processing are presented, with emphasis on multimedia delivery, retargeting, quality assessment of image and video, medical imaging, and applications of stereoscopic 3D images.

10.
J Imaging; 7(7), 2021 Jul 15.
Article in English | MEDLINE | ID: mdl-39080905

ABSTRACT

The JPEG format, consisting of a set of image compression techniques, is one of the most commonly used image coding standards for both lossy and lossless image encoding. In this format, various techniques are used to improve image transmission and storage. In the final step of lossy image coding, JPEG uses either arithmetic or Huffman entropy coding to further compress the data produced by the lossy stages. Both modes encode all of the 8 × 8 DCT blocks without filtering out empty ones: an end-of-block marker is coded for each empty block, and these empty blocks cause an unnecessary increase in file size when stored with the rest of the data. In this paper, we propose a modified version of JPEG entropy coding. In the proposed version, instead of storing an end-of-block code for empty blocks with the rest of the data, we store their locations in a separate buffer and then compress that buffer with an efficient lossless method to achieve a higher compression ratio. The size of this additional buffer, which records the locations of the empty and non-empty blocks, was included when calculating bits per pixel for the test images. In image compression, peak signal-to-noise ratio versus bits per pixel has long been a major measure for evaluating coding performance. Experimental results indicate that the proposed modified algorithm achieves lower bits per pixel while retaining quality.
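The proposed side-buffer idea can be sketched as follows; run-length coding stands in for the paper's unspecified efficient lossless method, and the two-coefficient blocks are toy stand-ins for 8 × 8 DCT blocks.

```python
def split_blocks(dct_blocks):
    """Record an empty/non-empty bitmap and keep only non-empty blocks."""
    bitmap = [int(any(c != 0 for c in blk)) for blk in dct_blocks]
    nonempty = [blk for blk, bit in zip(dct_blocks, bitmap) if bit]
    return bitmap, nonempty

def rle(bits):
    """Run-length encode the bitmap as (bit, run_length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

blocks = [[0, 0], [3, 0], [0, 0], [0, 0], [1, 2]]   # toy "DCT blocks"
bitmap, nonempty = split_blocks(blocks)
print(bitmap)        # which blocks carry coefficients
print(rle(bitmap))   # compressed location buffer
```

Long runs of empty blocks, common in smooth image regions, then cost a few run-length pairs instead of one end-of-block code each.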

11.
Healthc Technol Lett; 6(6): 271-274, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038870

ABSTRACT

Kidney stones are a common urologic condition with a high rate of recurrence. Recurrence depends on a multitude of factors, including the incidence of the precursors to kidney stones: plugs and plaques. One method of characterising the stone precursors is endoscopic assessment, though it is manual and time-consuming. Deep learning has become a popular technique for semantic segmentation because of the high accuracy that has been demonstrated. The present Letter examined the efficacy of deep learning in segmenting the renal papilla, plaque, and plugs. A U-Net model with a ResNet-34 encoder was tested; the Letter examined dropout (to avoid overtraining) and two different loss functions (to address the class-imbalance problem). The models were trained on 1666 images and tested on 185 images. The Jaccard-cross-entropy loss function was more effective than the focal loss function. The model with a dropout rate of 0.4 was found to be more effective owing to its generalisability. The model was largely successful at delineating the papilla and was able to correctly detect the plaques and plugs, although small plaques were challenging. Deep learning was found to be applicable to segmentation of endoscopic images for the papilla, plaque, and plug, with room for improvement.
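For reference, the focal loss compared above down-weights easy examples relative to plain cross-entropy; the sketch below shows both for a single binary prediction, with the common default gamma = 2 assumed (the Letter's exact settings are not given).

```python
import math

def cross_entropy(p, y):
    """Binary cross-entropy for predicted probability p and label y in {0, 1}."""
    pt = p if y == 1 else 1 - p
    return -math.log(pt)

def focal_loss(p, y, gamma=2.0):
    """Focal loss (Lin et al.): scales cross-entropy by (1 - p_t)**gamma,
    suppressing the contribution of well-classified examples."""
    pt = p if y == 1 else 1 - p
    return -((1 - pt) ** gamma) * math.log(pt)

# An easy, well-classified pixel (p = 0.9 for a true positive) is
# down-weighted far more than a hard one (p = 0.1).
print(cross_entropy(0.9, 1), focal_loss(0.9, 1))
print(cross_entropy(0.1, 1), focal_loss(0.1, 1))
```

This is why focal loss is a common choice when a class such as "plaque" occupies only a tiny fraction of the pixels.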

12.
Healthc Technol Lett; 1(2): 74-79, 2014 Jun.
Article in English | MEDLINE | ID: mdl-26609382

ABSTRACT

E-medicine is a process of providing health care services to people using the Internet or any networking technology. In this Letter, a new idea is proposed to model the physical structure of the e-medicine system so as to better provide offline health care services. Smart cards are used to authenticate each user individually. A unique technique is also suggested to verify the card owner's identity and to embed secret data into the card while providing patients' reports either at booths or at the e-medicine server system. The simulation results of the card authentication and embedding procedures support the proposed implementation.
