Results 1 - 12 of 12
1.
Sensors (Basel) ; 23(4)2023 Feb 17.
Article in English | MEDLINE | ID: mdl-36850867

ABSTRACT

Compressive sensing (CS) has been proposed as a disruptive approach to developing a novel class of optical instrumentation for diverse application domains. Thanks to sparsity, an inherent feature of many natural signals, CS allows a signal to be acquired in a very compact way, merging acquisition and compression into a single step and, furthermore, offering the capability of using a limited number of detector elements to obtain a reconstructed image with a larger number of pixels. Although the CS paradigm has already been applied in several application domains, from medical diagnostics to microscopy, studies related to space applications are very limited. In this paper, we present and discuss the instrumental concept, optical design, and performance of a CS imaging spectrometer for ultraviolet-visible (UV-Vis) stellar spectroscopy. The instrument, which is pixel-limited over the entire 300 nm-650 nm spectral range, features spectral sampling ranging from 2.2 nm@300 nm to 22 nm@650 nm, with a total of 50 samples per spectrum. Reconstruction quality is good, as measured by several metrics chosen from those recommended by the CCSDS. The designed instrument can achieve compression ratios of 20 or higher without significant loss of information. A pros-and-cons analysis of the CS approach is finally carried out, highlighting the main differences with respect to a traditional system.
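As a generic illustration of the acquisition model described above (not the paper's spectrometer design), the following sketch acquires a sparse signal with a random sensing matrix, merging acquisition and compression into one step, and recovers it with orthogonal matching pursuit; all dimensions and the choice of OMP are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 4           # ambient dimension, measurements, sparsity

# k-sparse signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# single-step acquisition: measurements are random projections of x
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                    # acquisition and compression merged

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedy sparse recovery."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With these dimensions the signal is recovered from fewer than a third as many measurements as pixels, which is the compression-ratio effect the abstract describes.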

2.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4610-4619, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34653010

ABSTRACT

Graph neural networks (GNNs) have become a staple for learning on and analyzing data defined over graphs. However, several results suggest an inherent difficulty in extracting better performance by increasing the number of layers. Recent works attribute this to a phenomenon peculiar to the extraction of node features in graph-based tasks, i.e., the need to consider multiple neighborhood sizes at the same time and adaptively tune them. In this article, we investigate the recently proposed randomly wired architectures in the context of GNNs. Instead of building deeper networks by stacking many layers, we prove that employing a randomly wired architecture can be a more effective way to increase the capacity of the network and obtain richer representations. We show that such architectures behave like an ensemble of paths, which are able to merge contributions from receptive fields of varied size. Moreover, these receptive fields can also be modulated to be wider or narrower through the trainable weights over the paths. We also provide extensive experimental evidence of the superior performance of randomly wired architectures over multiple tasks and five graph convolution definitions, using recent benchmarking frameworks that address the reliability of previous testing methodologies.
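The ensemble-of-paths idea can be sketched, very loosely, with a toy randomly wired stack of graph convolutions in NumPy: each layer receives a random subset of earlier outputs through weighted connections, so the final representation mixes paths of different lengths (i.e., receptive fields of varied size). The wiring rule, weights, and dimensions below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
N, F, L = 6, 4, 5              # graph nodes, feature dim, network layers

# toy undirected graph: symmetric adjacency, degree-normalized shift operator
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
S = A / np.maximum(A.sum(1, keepdims=True), 1)

X = rng.normal(size=(N, F))

# random wiring: layer j receives a random subset of earlier outputs
wiring = [rng.choice(j + 1, size=min(j + 1, 2), replace=False) for j in range(L)]
Ws = [rng.normal(size=(F, F)) / np.sqrt(F) for _ in range(L)]
alphas = [rng.random(len(w)) for w in wiring]   # path weights (trainable in a real model)

outs = [X]
for j in range(L):
    agg = sum(a * outs[i] for a, i in zip(alphas[j], wiring[j]))
    outs.append(np.maximum(S @ agg @ Ws[j], 0))  # one graph conv + ReLU

# final representation merges contributions from paths of varied length
H = np.mean(outs[1:], axis=0)
print(H.shape)
```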

3.
Article in English | MEDLINE | ID: mdl-32755859

ABSTRACT

Non-local self-similarity is well-known to be an effective prior for the image denoising problem. However, little work has been done to incorporate it into convolutional neural networks, which surpass non-local model-based methods despite only exploiting local information. In this paper, we propose a novel end-to-end trainable neural network architecture employing layers based on graph convolution operations, thereby creating neurons with non-local receptive fields. The graph convolution operation generalizes the classic convolution to arbitrary graphs. In this work, the graph is dynamically computed from similarities among the hidden features of the network, so that the powerful representation learning capabilities of the network are exploited to uncover self-similar patterns. We introduce a lightweight Edge-Conditioned Convolution which addresses vanishing gradient and over-parameterization issues of this particular graph convolution. Extensive experiments show state-of-the-art performance with improved qualitative and quantitative results on both synthetic Gaussian noise and real noise.
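A minimal sketch of the dynamic-graph idea: build a k-nearest-neighbor graph from the hidden features themselves, so each neuron aggregates over a non-local receptive field. The simple neighbor-averaging aggregation stands in for the paper's Edge-Conditioned Convolution, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 64, 8, 5             # feature vectors, feature dim, neighbors

# hidden features of a network layer (random stand-ins here)
H = rng.normal(size=(n, d))

# dynamic graph: k nearest neighbors in feature space, recomputed per layer
D = np.linalg.norm(H[:, None] - H[None, :], axis=-1)
np.fill_diagonal(D, np.inf)                 # exclude self-loops
knn = np.argsort(D, axis=1)[:, :k]          # non-local receptive field per node

# a minimal 'graph convolution': mix self and neighbor features
H_out = 0.5 * H + 0.5 * H[knn].mean(axis=1)
print(H_out.shape)
```

Because the neighbors are chosen by feature similarity rather than spatial proximity, self-similar patches anywhere in the image can contribute to a pixel's denoised value.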

4.
EURASIP J Adv Signal Process ; 2018(1): 56, 2018.
Article in English | MEDLINE | ID: mdl-30956656

ABSTRACT

The aim of this paper is to develop strategies to estimate the sparsity degree of a signal from compressive projections, without the burden of recovery. We consider both the noise-free and the noisy settings, and we show how to extend the proposed framework to the case of non-exactly sparse signals. The proposed method employs γ-sparsified random matrices and is based on a maximum likelihood (ML) approach, exploiting the property that the acquired measurements are distributed according to a mixture model whose parameters depend on the signal sparsity. In the presence of noise, given the complexity of ML estimation, the probability model is approximated with a two-component Gaussian mixture (2-GMM), which can be easily learned via expectation-maximization. Besides the design of the method, this paper makes two novel contributions. First, in the absence of noise, sufficient conditions on the number of measurements are provided for almost sure exact estimation in different regimes of behavior, defined by the scaling of the measurement sparsity γ and the signal sparsity. In the presence of noise, our second contribution is to prove that the 2-GMM approximation is accurate in the large-system limit for a proper choice of the γ parameter. Simulations validate our predictions and show that the proposed algorithms outperform state-of-the-art methods for sparsity estimation. Finally, the estimation strategy is applied to non-exactly sparse signals. The results are very encouraging, suggesting further extension to more general frameworks.
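In the noise-free regime, the mixture-model intuition can be sketched as follows: with a γ-sparsified sensing matrix, a measurement is exactly zero precisely when its nonzero entries miss the signal support, which happens with probability (1 − γ)^k for a k-sparse signal, so counting zero measurements yields an estimate of k without any recovery. The parameters below are illustrative, not the paper's regimes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, gamma = 1000, 2000, 50, 0.01

# k-sparse signal with continuous-valued nonzeros (exact cancellation has probability 0)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# gamma-sparsified random sensing matrix: each entry nonzero with probability gamma
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < gamma)
y = A @ x

# a measurement is zero iff its nonzero pattern misses the support:
# P(y_i = 0) = (1 - gamma)^k  ->  invert the empirical zero fraction for k
z = np.mean(y == 0)
k_hat = np.log(z) / np.log(1 - gamma)
print("estimated sparsity:", round(k_hat))
```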

5.
EURASIP J Adv Signal Process ; 2018(1): 46, 2018.
Article in English | MEDLINE | ID: mdl-30996728

ABSTRACT

In this paper, we propose a new method for support detection and estimation of sparse and approximately sparse signals from compressed measurements. Using a double Laplace mixture model as the parametric representation of the signal coefficients, the problem is formulated as a weighted ℓ1 minimization. We then introduce a new family of iterative shrinkage-thresholding algorithms based on double Laplace mixture models. They preserve the computational simplicity of their classical counterparts and improve iterative estimation by incorporating soft support detection. In particular, at each iteration, by learning the components that are likely to be nonzero from the current MAP signal estimate, the shrinkage-thresholding step is adaptively tuned and optimized. Unlike other adaptive methods, we are able to prove, under suitable conditions, the convergence of the proposed methods to a local minimum of the weighted ℓ1 minimization. Moreover, we also provide an upper bound on the reconstruction error. Finally, we show through numerical experiments that the proposed methods outperform classical shrinkage-thresholding in terms of convergence rate, accuracy, and sparsity-undersampling trade-off.
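For reference, the classical iterative shrinkage-thresholding baseline that such methods extend is plain ISTA for the unweighted ℓ1 problem, without the adaptive soft support detection described above; the dimensions, λ, and iteration count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k, lam = 100, 60, 5, 0.05

# k-sparse ground truth and compressed measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# soft-thresholding operator (the shrinkage step)
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(500):
    # gradient step on the data term, then shrinkage with fixed threshold
    x_hat = soft(x_hat + A.T @ (y - A @ x_hat) / L, lam / L)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The fixed threshold lam / L applied to every component is exactly what the paper's adaptive variants replace with a per-component, support-aware threshold.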

6.
IEEE Trans Image Process ; 26(6): 2656-2668, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28333629

ABSTRACT

In this paper, we introduce new gradient-based methods for image recovery from a small collection of spectral coefficients of the Fourier transform, which is of particular interest for several scanning technologies, such as magnetic resonance imaging. Since the gradients of a medical image are much sparser or more compressible than the image itself, classical ℓ1-minimization methods have been used to recover these relative differences. The image values can then be obtained by integration algorithms imposing boundary constraints. Compared with classical gradient recovery methods, we propose two new techniques that improve reconstruction. First, we cast the gradient recovery problem as a compressed sensing problem, taking into account that the curl of the gradient field must be zero. Second, inspired by the emerging field of signal processing on graphs, we formulate the gradient recovery problem as an inverse problem on graphs. Iteratively reweighted ℓ1 recovery methods are proposed to recover these relative differences and the structure of the similarity graph. Once the gradient field is estimated, the image is recovered from the compressed Fourier measurements using least squares estimation. Numerical experiments show that the proposed approach outperforms state-of-the-art image recovery methods.
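The zero-curl constraint exploited by the first technique can be verified numerically: for any image, the discrete cross-differences of its forward-difference gradient field cancel identically, so any candidate gradient field that violates this cannot come from an image. A minimal check (the image here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(32, 32))           # any image

# forward-difference gradient field
Gx = X[:, 1:] - X[:, :-1]               # horizontal differences
Gy = X[1:, :] - X[:-1, :]               # vertical differences

# discrete curl: d/dy of Gx minus d/dx of Gy; zero for a true gradient field
curl = (Gx[1:, :] - Gx[:-1, :]) - (Gy[:, 1:] - Gy[:, :-1])
print(np.abs(curl).max())
```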

7.
IEEE Trans Image Process ; 26(1): 303-314, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27831877

ABSTRACT

In image compression, classical block-based separable transforms tend to be inefficient when image blocks contain arbitrarily shaped discontinuities. For this reason, transforms incorporating directional information are an appealing alternative. In this paper, we propose a new approach to this problem, namely a discrete cosine transform (DCT) that can be steered in any chosen direction. This transform, called the steerable DCT (SDCT), allows pairs of basis vectors to be rotated in a flexible way and enables precise matching of directionality in each image block, achieving improved coding efficiency. The optimal rotation angles for the SDCT can be represented as the solution of a suitable rate-distortion (RD) problem. We propose iterative methods to search for this solution, and we develop a fully fledged image encoder to compare our techniques in practice with other competing transforms. Analytical and numerical results prove that the SDCT outperforms both the DCT and state-of-the-art directional transforms.
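The basis-vector rotation idea can be sketched with a single Givens rotation applied to a pair of orthonormal DCT-II basis vectors: the steered transform remains orthonormal for any angle, so it is still a valid transform for coding. The block size, angle, and chosen pair below are illustrative:

```python
import numpy as np

N = 8
# orthonormal DCT-II basis matrix (rows are basis vectors)
idx = np.arange(N)
C = np.cos(np.pi * idx[:, None] * (2 * idx[None, :] + 1) / (2 * N))
C[0] *= np.sqrt(1 / N)
C[1:] *= np.sqrt(2 / N)

# Givens rotation mixing one pair of basis vectors by angle theta
theta, i, j = np.pi / 6, 2, 3
G = np.eye(N)
G[i, i] = G[j, j] = np.cos(theta)
G[i, j], G[j, i] = np.sin(theta), -np.sin(theta)

S = G @ C                               # steered transform
print(np.allclose(S @ S.T, np.eye(N)))  # orthonormality preserved
```

In the full 2-D SDCT, such rotations act on pairs of 2-D basis vectors sharing the same frequency, with the angle chosen per block by the RD search the abstract mentions.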

8.
IEEE Trans Image Process ; 25(11): 5077-5087, 2016 11.
Article in English | MEDLINE | ID: mdl-27552755

ABSTRACT

Compressed sensing (CS) is a fast and efficient way to obtain compact signal representations. Oftentimes, one wishes to extract some information from the available compressed signal. Since CS signal recovery is typically expensive from a computational point of view, it is inconvenient to first recover the signal and then extract the information. A much more effective approach is to estimate the information directly from the signal's linear measurements. In this paper, we propose a novel framework for compressive estimation of autoregressive (AR) process parameters based on ad hoc sensing matrix construction. In more detail, we introduce a compressive least squares estimator for the AR(p) parameters and a specific AR(1) compressive Bayesian estimator. We exploit the proposed techniques to address two important practical problems. The first is compressive covariance estimation for Toeplitz-structured covariance matrices, which we tackle with a novel parametric approach based on the estimated AR parameters. The second is a block-based compressive imaging system, for which we introduce an algorithm that adaptively calculates the number of measurements to be acquired for each block from a set of initial measurements, based on its degree of compressibility. We show that the proposed techniques outperform the state-of-the-art methods for these two problems.
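For orientation, this is the classical (non-compressive) least squares estimator of an AR(1) coefficient, which the paper's compressive estimators generalize to the setting where only linear measurements of the process are available; the coefficient and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
rho, T = 0.8, 5000

# simulate an AR(1) process: x[t] = rho * x[t-1] + w[t]
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal()

# least squares estimate of the AR(1) coefficient from full samples:
# regress x[t] on x[t-1]
rho_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
print("estimated rho:", rho_hat)
```

The corresponding covariance of an AR(1) process is Toeplitz with entries proportional to ρ^|i−j|, which is why an estimate of ρ parametrizes the whole covariance matrix in the first application above.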

9.
IEEE Trans Image Process ; 19(6): 1491-503, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20215084

ABSTRACT

Digital fountain codes have emerged as a low-complexity alternative to Reed-Solomon codes for erasure correction. These codes are especially relevant in the field of wireless video, where low encoding and decoding complexity is crucial. In this paper, we introduce a new class of digital fountain codes based on a sliding-window approach applied to Raptor codes. These codes have several properties useful for video applications and provide better performance than classical digital fountains. We then propose an application of sliding-window Raptor codes to wireless video broadcasting using scalable video coding. The rates of the base and enhancement layers, as well as the number of coded packets generated for each layer, are optimized so as to yield the best possible expected quality at the receiver side, while providing unequal loss protection to the different layers according to their importance. The proposed system has been validated in a UMTS broadcast scenario, showing that it improves end-to-end quality and is robust to fluctuations in the packet loss rate.
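A toy random linear fountain over GF(2) (much simpler than Raptor codes, and without the sliding window) illustrates the rateless principle: each coded packet is the XOR of a random subset of source packets, and any sufficiently large set of received packets decodes the source by Gaussian elimination. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
k, n_coded = 8, 24              # source packets, received coded packets

src = rng.integers(0, 256, size=(k, 16), dtype=np.uint8)   # 16-byte packets

# random linear fountain: each coded packet XORs a random subset of sources
G = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
coded = np.zeros((n_coded, 16), dtype=np.uint8)
for i in range(n_coded):
    for j in range(k):
        if G[i, j]:
            coded[i] ^= src[j]

# decode by Gaussian elimination over GF(2) (XOR is addition)
A, B = G.copy(), coded.copy()
row = 0
for col in range(k):
    piv = next((r for r in range(row, n_coded) if A[r, col]), None)
    if piv is None:
        continue
    A[[row, piv]] = A[[piv, row]]
    B[[row, piv]] = B[[piv, row]]
    for r in range(n_coded):
        if r != row and A[r, col]:
            A[r] ^= A[row]
            B[r] ^= B[row]
    row += 1

decoded = B[:k]
print("decoded correctly:", np.array_equal(decoded, src))
```

Practical fountains such as LT and Raptor codes replace the dense random combinations with sparse, carefully designed degree distributions so that a cheap peeling decoder suffices instead of full elimination.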


Subjects
Algorithms , Artifacts , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Telecommunications , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
10.
IEEE Trans Image Process ; 15(4): 807-18, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16579370

ABSTRACT

JPEG 2000 is the novel ISO standard for image and video coding. Besides its improved coding efficiency, it also provides a few error-resilience tools to limit the effect of errors in the codestream, which can occur when the compressed image or video data are transmitted over an error-prone channel, as typically happens in wireless communication scenarios. However, for very harsh channels, these tools often do not provide an adequate degree of error protection. In this paper, we propose a novel error-resilience tool for JPEG 2000, based on the concept of ternary arithmetic coders employing a forbidden symbol. Such coders introduce a controlled degree of redundancy during the encoding process, which can be exploited at the decoder side to detect and correct errors. We propose a maximum likelihood and a maximum a posteriori context-based decoder, specifically tailored to the JPEG 2000 arithmetic coder, which are able to carry out both hard and soft decoding of a corrupted codestream. The proposed decoder extends the JPEG 2000 capabilities in error-prone scenarios without violating the standard syntax. Extensive simulations on video sequences show that the proposed decoders largely outperform the standard in terms of PSNR and visual quality.
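The forbidden-symbol mechanism trades a small, controlled redundancy for error-detection capability: reserving probability mass ε for a symbol that is never encoded shrinks every real symbol's coding interval by (1 − ε), costing −log2(1 − ε) bits per symbol, while after a bit error the desynchronized decoder lands in the forbidden interval with probability about ε per decoded symbol. A numeric sketch (ε is illustrative, not the paper's value):

```python
import math

eps = 0.05                      # probability mass reserved for the forbidden symbol

# redundancy: each real symbol's interval shrinks by a factor (1 - eps)
redundancy_bits_per_symbol = -math.log2(1 - eps)

# after an error, each decoded symbol independently hits the forbidden
# interval with probability ~eps, so detection within n symbols is:
detect = lambda n: 1 - (1 - eps) ** n

print("redundancy (bits/symbol):", redundancy_bits_per_symbol)
print("P(detect within 50 symbols):", detect(50))
print("P(detect within 200 symbols):", detect(200))
```

This trade-off (tiny per-symbol cost, near-certain detection after a modest delay) is what the ML and MAP decoders exploit to locate and correct corrupted codestream segments.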


Subjects
Computer Communication Networks , Computer Graphics , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Video Recording/methods , Algorithms , Data Compression/standards , Image Enhancement/standards , Image Interpretation, Computer-Assisted/standards , Photography/methods , Photography/standards , Selection Bias , Sensitivity and Specificity , Video Recording/standards
11.
IEEE Trans Image Process ; 13(6): 751-7, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15648866

ABSTRACT

We present hybrid loss protection as a new channel coding and packetization scheme for image transmission over nonprioritized lossy packet networks. The scheme employs an interleaver-based structure, and attempts to maximize the expected peak signal-to-noise ratio (PSNR) at the receiver given the constraint that the probability of failure, i.e., the probability that the PSNR of the decoded image is below a given threshold, is upper-bounded by a user-defined value. A new code-allocation algorithm is proposed, which employs Gilbert-Elliot modeling of the network statistics. Experimental results are provided in the case of transmission of images encoded by SPIHT and JPEG 2000 over a wireline, as well as a wireless UMTS-based Internet connection.
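The Gilbert-Elliott model used for the network statistics can be sketched as a two-state Markov chain with a low loss rate in the Good state and a high loss rate in the Bad state, producing the bursty losses that the interleaver-based structure is designed to spread out. All probabilities below are illustrative, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(8)

# Gilbert-Elliott model: bursty packet loss via Good/Bad Markov states
p_gb, p_bg = 0.05, 0.30         # P(Good -> Bad), P(Bad -> Good)
loss_rate = {"G": 0.01, "B": 0.40}

state, lost, T = "G", 0, 100_000
for _ in range(T):
    if state == "G":
        state = "B" if rng.random() < p_gb else "G"
    else:
        state = "G" if rng.random() < p_bg else "B"
    lost += rng.random() < loss_rate[state]

# stationary loss = pi_B * 0.40 + pi_G * 0.01, with pi_B = p_gb / (p_gb + p_bg)
print("simulated loss rate:", lost / T)
```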


Subjects
Algorithms , Computer Communication Networks , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Quality Assurance, Health Care/methods , Quality Control , Reproducibility of Results , Sensitivity and Specificity
12.
IEEE Trans Image Process ; 11(6): 596-604, 2002.
Article in English | MEDLINE | ID: mdl-18244658

ABSTRACT

This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for selecting optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The results obtained lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
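The lifting scheme with integer arithmetic can be sketched with the classic 5/3 (LeGall) filter, one factorization such a scheme produces: the integer rounding in the predict and update steps is undone exactly by the mirrored inverse steps, giving perfect (lossless) reconstruction. The periodic boundary handling here is an illustrative simplification:

```python
import numpy as np

def fwd53(x):
    """Forward integer 5/3 (LeGall) lifting transform, 1-D, even length."""
    e, o = x[0::2].astype(int), x[1::2].astype(int)
    # predict: detail = odd - floor((left even + right even) / 2)
    d = o - ((e + np.roll(e, -1)) >> 1)
    # update: approx = even + floor((prev detail + detail + 2) / 4)
    s = e + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def inv53(s, d):
    """Inverse transform: run the lifting steps backwards."""
    e = s - ((np.roll(d, 1) + d + 2) >> 2)
    o = d + ((e + np.roll(e, -1)) >> 1)
    x = np.empty(2 * len(s), dtype=int)
    x[0::2], x[1::2] = e, o
    return x

x = np.arange(16) ** 2 % 31          # any integer signal
s, d = fwd53(x)
print("perfect reconstruction:", np.array_equal(inv53(s, d), x))
```

Because every intermediate value is an integer, the same structure supports both lossless coding and, after quantization of s and d, lossy coding; it also maps directly onto the kind of fixed-point VLSI datapath the abstract describes.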
