Results 1 - 20 of 26
1.
Sensors (Basel) ; 18(11), 2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453582

ABSTRACT

This paper proposes a novel algorithm for image phase retrieval, i.e., for recovering complex-valued images from the amplitudes of noisy linear combinations (often the Fourier transform) of the sought complex images. The algorithm is developed within the alternating projection framework and aims at high performance for heavily noisy (Poissonian or Gaussian) observations. The estimation of the target images is reformulated as a sparse regression, often termed sparse coding, in the complex domain. This is accomplished by learning a complex-domain dictionary from the data it represents via matrix factorization with sparsity constraints on the code (i.e., the regression coefficients). Our algorithm, termed dictionary learning phase retrieval (DLPR), jointly learns this dictionary and reconstructs the unknown target image. The effectiveness of DLPR is illustrated through experiments on simulated and real complex-valued images, where it shows noticeable advantages over state-of-the-art competitors.
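
For orientation, the sketch below shows the plain alternating-projection loop that this kind of method builds on, with simple complex soft-thresholding standing in for the paper's learned-dictionary sparse-coding step; the function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-thresholding: a simple stand-in for the sparse-coding step."""
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

def alternating_projection_pr(meas_mag, n_iter=200, tau=0.05, seed=0):
    """Recover a complex image from Fourier magnitudes by alternating projections.

    meas_mag : observed |F(x)| (same shape as the image).
    The image-domain step here is plain soft-thresholding; DLPR instead uses
    sparse coding on a learned complex-domain dictionary.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(meas_mag.shape) + 1j * rng.standard_normal(meas_mag.shape)
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        # Fourier-domain projection: keep the phase, impose the measured magnitude.
        X = meas_mag * np.exp(1j * np.angle(X))
        x = np.fft.ifft2(X)
        # Image-domain regularization step.
        x = soft_threshold(x, tau)
    return x
```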

2.
IEEE Trans Neural Netw Learn Syst ; 32(5): 2209-2223, 2021 May.
Article in English | MEDLINE | ID: mdl-32609616

ABSTRACT

Nonnegative blind source separation (nBSS) is often a challenging inverse problem, particularly when the mixing system is ill-conditioned. In this work, we focus on an important nBSS instance, known as hyperspectral unmixing (HU) in remote sensing. HU is a matrix factorization problem that aims to factor the data into the so-called endmember matrix, holding the material hyperspectral signatures, and the abundance matrix, holding the material fractions at each image pixel. The hyperspectral signatures are usually highly correlated, leading to a fast decay of the singular values (and, hence, a high condition number) of the endmember matrix, so HU is often an ill-conditioned nBSS problem. We introduce a new theoretical framework to attack such tough scenarios via the John ellipsoid (JE) from functional analysis. The idea is to identify the maximum-volume ellipsoid inscribed in the data convex hull and then affinely map this ellipsoid into a Euclidean ball. By applying the same affine mapping to the data mixtures, we prove that the endmember matrix associated with the mapped data has condition number 1, the lowest possible, and that these (preconditioned) endmembers form a regular simplex. Exploiting this regular structure, we design a novel nBSS criterion with a provable identifiability guarantee and devise an algorithm to realize the criterion. Moreover, for the first time, the optimization problem for computing the JE is solved exactly for a large-scale instance; our solver employs a split augmented Lagrangian shrinkage algorithm with all proximal operators computed in closed form. The competitiveness of the proposed method is illustrated by numerical simulations and real-data experiments.

3.
Anal Chem ; 82(4): 1462-9, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20095581

ABSTRACT

Rapid detection of the non-authenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step to evaluate their impact on patient health and to help authorities track their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging and effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum-volume simplex containing the spectral vectors and the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these minimum-volume techniques for unmixing NIR hyperspectral data of tablets. The experiments reported herein show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least squares), which is considered the state of the art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets.


Subjects
Fraud; Pharmaceutical Preparations/analysis; Pharmaceutical Preparations/chemistry; Spectrophotometry, Infrared; Tablets; Time Factors
4.
IEEE Trans Cybern ; 50(10): 4469-4480, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31794410

ABSTRACT

Combining a high-spatial-resolution multispectral image (HR-MSI) with a low-spatial-resolution hyperspectral image (LR-HSI) has become a common way to enhance the spatial resolution of the HSI. The existing state-of-the-art LR-HSI and HR-MSI fusion methods are mostly based on matrix factorization, where the matrix representation of the data makes it hard to fully exploit the inherent structures of the 3-D HSI. We propose a nonlocal sparse tensor factorization approach, called NLSTF_SMBF, for the semiblind fusion of HSI and MSI. The proposed method decomposes the HSI into smaller full-band patches (FBPs), which, in turn, are factored as dictionaries of the three HSI modes and a sparse core tensor. This decomposition allows the fusion problem to be solved by estimating a sparse core tensor and three dictionaries for each FBP. To exploit the nonlocal self-similarities of the HSI, similar FBPs are clustered together and assumed to share the same dictionaries. For each group, we learn the dictionaries from the observed HR-MSI and LR-HSI, and the corresponding sparse core tensor of each FBP is computed via tensor sparse coding. Two distinctive features of NLSTF_SMBF are that: 1) it is blind with respect to the point spread function (PSF) of the hyperspectral sensor and 2) it copes with spatially variant PSFs. The experimental results provide evidence of the advantages of the NLSTF_SMBF method over the existing state-of-the-art methods, namely in semiblind scenarios.

5.
Article in English | MEDLINE | ID: mdl-31021796

ABSTRACT

This paper introduces a new approach to patch-based image restoration based on external datasets and importance sampling. The minimum mean squared error (MMSE) estimate of the image patches, whose computation requires solving a multidimensional (typically intractable) integral, is approximated using samples from an external dataset. The new method, which can be interpreted as a generalization of external non-local means (NLM), uses self-normalized importance sampling to efficiently approximate the MMSE estimates. The use of self-normalized importance sampling endows the proposed method with great flexibility, namely regarding the statistical properties of the measurement noise. The effectiveness of the proposed method is shown in a series of experiments using both generic large-scale and class-specific external datasets.
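
The following minimal sketch illustrates the self-normalized importance-sampling idea for a single patch under white Gaussian noise; the function name and the assumption that the external patches act as i.i.d. samples from the patch prior are illustrative, not taken from the paper.

```python
import numpy as np

def snis_mmse_patch(y, external_patches, sigma):
    """Approximate the MMSE estimate of a clean patch from a noisy patch y.

    external_patches : (N, d) array of clean patches from an external dataset,
                       treated here as samples from the patch prior.
    Under white Gaussian noise, the likelihood p(y | x_i) supplies the
    self-normalized importance weights.
    """
    d2 = np.sum((external_patches - y) ** 2, axis=1)   # squared distances to y
    logw = -d2 / (2.0 * sigma ** 2)                    # log-likelihood weights
    logw -= logw.max()                                 # numerical stability
    w = np.exp(logw)
    w /= w.sum()                                       # self-normalization
    return w @ external_patches                        # weighted average ~ MMSE estimate
```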

6.
Appl Opt ; 47(29): 5358-69, 2008 Oct 10.
Article in English | MEDLINE | ID: mdl-18846177

ABSTRACT

The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details.
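
A toy two-step pipeline in the same spirit is sketched below: fixed-window Gaussian smoothing of the complex phasor stands in for the paper's adaptive local polynomial approximation, and skimage's unwrap_phase stands in for PUMA; both substitutions are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import unwrap_phase

def two_step_absolute_phase(wrapped, sigma=1.5):
    """Toy two-step pipeline: denoise the wrapped phase, then unwrap it.

    Denoising filters the complex phasor exp(j*phi), which respects the
    modulo-2*pi structure; unwrapping uses skimage's unwrap_phase instead of
    the PUMA algorithm referenced in the abstract.
    """
    phasor = np.exp(1j * wrapped)
    smoothed = gaussian_filter(phasor.real, sigma) + 1j * gaussian_filter(phasor.imag, sigma)
    denoised_wrapped = np.angle(smoothed)   # back to a wrapped phase in (-pi, pi]
    return unwrap_phase(denoised_wrapped)
```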

7.
Article in English | MEDLINE | ID: mdl-30222572

ABSTRACT

We propose a new approach to image fusion, inspired by the recent plug-and-play (PnP) framework. In PnP, a denoiser is treated as a black box and plugged into an iterative algorithm, taking the place of the proximity operator of some convex regularizer, which is formally equivalent to a denoising operation. This approach offers flexibility and excellent performance, but convergence may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. Here, we propose using a scene-adapted denoiser (i.e., one targeted to the specific scene being imaged) plugged into the iterations of the alternating direction method of multipliers (ADMM). This approach, which is a natural choice for image fusion problems, not only yields state-of-the-art results but also allows proving convergence of the resulting algorithm. The proposed method is tested on two different problems: hyperspectral fusion/sharpening and fusion of blurred-noisy image pairs.
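
A generic PnP-ADMM skeleton is sketched below to show where a (scene-adapted) denoiser would plug in; the inexact gradient-based x-update, the fixed step size, and the assumption that the forward operator has roughly unit norm are all simplifications for illustration, not the paper's algorithm.

```python
import numpy as np

def pnp_admm(y, A, At, denoise, rho=1.0, n_iter=50):
    """Generic plug-and-play ADMM sketch for y = A x + noise.

    A, At   : forward operator and its adjoint, given as functions.
    denoise : black-box denoiser playing the role of the proximity operator;
              a scene-adapted denoiser would be plugged in here.
    """
    x = At(y)
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: a few gradient steps on ||y - A x||^2/2 + rho/2 ||x - (v - u)||^2
        # (assumes ||A|| is about 1 so the fixed 0.1 step is safe).
        for _ in range(10):
            grad = At(A(x) - y) + rho * (x - (v - u))
            x = x - 0.1 * grad
        # v-update: the plugged-in denoising step.
        v = denoise(x + u)
        # dual update.
        u = u + x - v
    return v
```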

8.
Article in English | MEDLINE | ID: mdl-29994767

ABSTRACT

Fusing a low-spatial-resolution hyperspectral image (LR-HSI) with a high-spatial-resolution multispectral image (HR-MSI) to obtain a high-spatial-resolution hyperspectral image (HR-HSI) has attracted increasing interest in recent years. In this paper, we propose a coupled sparse tensor factorization (CSTF)-based approach for fusing such images. In the proposed CSTF method, we consider an HR-HSI as a three-dimensional tensor and recast the fusion problem as the estimation of a core tensor and dictionaries of the three modes. The high spatial-spectral correlations in the HR-HSI are modeled by incorporating a regularizer that promotes sparse core tensors. The estimation of the dictionaries and the core tensor is formulated as a coupled tensor factorization of the LR-HSI and of the HR-MSI. Experiments on two remotely sensed HSIs demonstrate the superiority of the proposed CSTF algorithm over current state-of-the-art HSI-MSI fusion approaches.

9.
IEEE Trans Image Process ; 16(3): 698-709, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17357730

ABSTRACT

Phase unwrapping is the inference of absolute phase from modulo-2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are first-order Markov random fields. We provide an exact energy minimization algorithm whenever the corresponding clique potentials are convex, namely for the classical Lp-norm phase unwrapping with p ≥ 1. Its complexity is K T(n, 3n), where K is the length of the absolute phase domain measured in 2π units and T(n, m) is the complexity of a max-flow computation in a graph with n nodes and m edges. For nonconvex clique potentials, often used owing to their discontinuity-preserving ability, we face an NP-hard problem for which we devise an approximate solution. Both algorithms solve integer optimization problems by computing a sequence of binary optimizations, each one solved by graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase unwrapping max-flow/min-cut. A set of experimental results illustrates the effectiveness of the proposed approach and its competitiveness in comparison with state-of-the-art phase unwrapping algorithms.


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Image Process ; 16(12): 2992-3004, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092598

ABSTRACT

Iterative shrinkage/thresholding (IST) algorithms have recently been proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). The convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting a much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
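
For reference, here is a minimal sketch of the two-step iteration described in entry 10 above, applied to an ℓ1-regularized problem with soft-thresholding as the shrinkage step; the choice of the parameters alpha and beta (which the paper derives from spectral bounds on the observation operator) is left as plain inputs, an assumption of this sketch.

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding: proximity operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def twist(y, A, At, lam, alpha, beta, n_iter=100):
    """Two-step IST sketch for min ||y - A x||^2 / 2 + lam * ||x||_1.

    A, At : forward operator and its adjoint, given as functions.
    alpha, beta : the two-step parameters (set externally here).
    """
    x_prev = At(y)
    x = x_prev.copy()
    for _ in range(n_iter):
        grad_step = x + At(y - A(x))     # Landweber step (argument of the IST map)
        denoised = soft(grad_step, lam)  # shrinkage step
        x_new = (1 - alpha) * x_prev + (alpha - beta) * x + beta * denoised
        x_prev, x = x, x_new
    return x
```
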
11.
IEEE Trans Image Process ; 16(12): 2980-91, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092597

ABSTRACT

Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems with the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions that are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; and the presence of the convolution operator destroys the separability that underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using ℓ1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
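
To make the "singularity issue" of entry 11 concrete, here is a small IRLS sketch obtained from quadratic majorization of the ℓ1 penalty; the eps safeguard is an addition of this sketch (not taken from the paper) and is exactly what keeps the reweighting finite when coefficients reach zero.

```python
import numpy as np

def irls_l1(y, A, lam, n_iter=50, eps=1e-8):
    """IRLS via quadratic majorization of the l1 penalty (the MM view).

    Each iteration solves a weighted ridge problem; the weights lam/|x_i|
    blow up as coefficients approach zero -- the "singularity issue" --
    so a small eps keeps the linear system finite here.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = lam / np.maximum(np.abs(x), eps)   # majorizer weights
        H = A.T @ A + np.diag(w)
        x = np.linalg.solve(H, A.T @ y)        # weighted ridge update
    return x
```
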
12.
Appl Spectrosc ; 71(6): 1148-1156, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27852875

ABSTRACT

The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for that purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the model plasmid pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. Moreover, whenever no difference in predictive accuracy between a linear method and its kernelized version could be found, the results point to further investigation based on larger data sets. The use of nonlinear methods should, however, be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to successfully monitor the variables under study.


Subjects
Bioreactors; Nonlinear Dynamics; Plasmids; Spectroscopy, Fourier Transform Infrared; Biomass; Escherichia coli/genetics; Escherichia coli/metabolism; Plasmids/genetics; Plasmids/metabolism
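
As a hedged illustration of the linear-versus-kernel comparison discussed in entry 12, the snippet below fits PLS and RBF-kernel ridge regression on synthetic stand-in spectra; the data, split, and hyperparameters are arbitrary placeholders, not the paper's experimental setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are spectra, y mimics e.g. a biomass concentration.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 200))                  # 120 spectra, 200 wavelengths
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "PLS": PLSRegression(n_components=5),
    "kernel RR (RBF)": KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))             # PLS returns (n, 1), kRR returns (n,)
    print(f"{name}: test R^2 = {r2_score(y_te, pred):.3f}")
```
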
13.
IEEE Trans Image Process ; 15(4): 937-51, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16579380

ABSTRACT

Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class, i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Signal Processing, Computer-Assisted; Bayes Theorem; Computer Simulation; Models, Statistical; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
14.
IEEE Trans Image Process ; 25(10): 4565-79, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27416597

ABSTRACT

This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameters of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images, showing competitive results regarding the quality of the inferences and the computational complexity when compared with state-of-the-art algorithms.

15.
IEEE Trans Image Process ; 25(11): 5266-80, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27576251

ABSTRACT

In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a "partial" ADMM, in which not all variables are dualized. We report experimental comparisons with other primal-dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries.
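
A compact sketch of the alternating scheme described in entry 15, using a plain Tikhonov-regularized FFT deconvolution as the inner solver; the regularizer choice and parameter values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def fft_deconv_unknown_boundaries(y_obs, mask, psf_fft, reg=1e-2, n_iter=30):
    """Alternate between estimating unobserved pixels and FFT-based deconvolution.

    y_obs   : observed image (arbitrary values where mask == 0).
    mask    : 1 at observed pixels, 0 at the unknown boundary pixels.
    psf_fft : FFT of the (periodically embedded) blur kernel.
    """
    x = y_obs.copy()
    for _ in range(n_iter):
        reblurred = np.real(np.fft.ifft2(psf_fft * np.fft.fft2(x)))
        z = mask * y_obs + (1 - mask) * reblurred            # fill in unknown pixels
        X = np.conj(psf_fft) * np.fft.fft2(z) / (np.abs(psf_fft) ** 2 + reg)
        x = np.real(np.fft.ifft2(X))                          # fast FFT deconvolution
    return x
```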

16.
IEEE Trans Image Process ; 25(1): 274-88, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26540685

ABSTRACT

Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low-spatial-resolution HSIs with high-spatial-resolution multispectral images in order to obtain super-resolution HSIs. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to hyperspectral super-resolution through local dictionary learning using endmember induction algorithms, and we explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

17.
IEEE Trans Image Process ; 24(12): 5800-11, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26452285

ABSTRACT

This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold. First, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organized rather than randomly distributed at the pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into account the facts that pixels cannot be empty (i.e., at least one material is present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditionally on the maximum marginal a posteriori configuration of the support, to compute the minimum mean squared error estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the parameters of the Markov random field, thus relieving practitioners from setting regularization parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.

18.
IEEE Trans Neural Netw Learn Syst ; 25(10): 1894-908, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25291741

ABSTRACT

In this paper, we study the separation of synchronous sources (SSS) problem, which deals with the separation of sources whose phases are synchronous. This problem cannot be addressed through independent component analysis methods because synchronous sources are statistically dependent. We present a two-step algorithm, called phase locked matrix factorization (PLMF), to perform SSS. We also show that SSS is identifiable under some assumptions and that any global minimum of PLMF's cost function is a desirable solution for SSS. We extensively study the algorithm on simulated data and conclude that it can perform SSS with various numbers of sources and sensors and with various phase lags between the sources, both in the ideal (i.e., perfectly synchronous and noise-free) case and with various levels of additive noise in the observed signals and of phase jitter in the sources.

19.
IEEE Trans Image Process ; 23(1): 466-77, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144664

ABSTRACT

This paper presents a new method to estimate the parameters of two types of blur, linear uniform motion (approximated by a line characterized by its angle and length) and out-of-focus (modeled as a uniform disk characterized by its radius), for blind restoration of natural images. The method is based on the spectrum of the blurred images and is supported by a weak assumption, valid for most natural images: the power spectrum is approximately isotropic and decays with spatial frequency according to a power law. We introduce two modifications of the Radon transform that allow the identification of the blur spectrum pattern of the two types of blur mentioned above. The blur parameters are identified by fitting an appropriate function that accounts separately for the natural image spectrum and the blur frequency response. The accuracy of the proposed method is validated by simulations, and its effectiveness is assessed by testing the algorithm on real natural blurred images and comparing it with state-of-the-art blind deconvolution methods.


Subjects
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Computer Simulation; Linear Models; Reproducibility of Results; Sensitivity and Specificity
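
A hedged reading of the spectral model implied by the abstract of entry 19 (the exact fitting function used in the paper may differ) is the following, with S_g the blurred-image power spectrum, H the blur frequency response, and C, α the power-law parameters of the natural-image spectrum:

```latex
S_g(\boldsymbol{\omega}) \;\approx\; |H(\boldsymbol{\omega})|^{2}\, S_f(\boldsymbol{\omega}),
\qquad
S_f(\boldsymbol{\omega}) \;\propto\; \frac{C}{\|\boldsymbol{\omega}\|^{\alpha}}
```
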
20.
IEEE Trans Image Process ; 20(3): 681-95, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20840899

ABSTRACT

We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), in which a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and the constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising), tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state of the art.
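
A small dense-matrix ADMM sketch of the constrained basis-pursuit-denoising formulation referenced in entry 20 is given below; it is not the authors' algorithm (which is built around fast frame and convolution operators), just a toy instance of the same splitting idea.

```python
import numpy as np

def ball_project(v, center, radius):
    """Project v onto the l2 ball ||v - center|| <= radius."""
    d = v - center
    nrm = np.linalg.norm(d)
    return v if nrm <= radius else center + radius * d / nrm

def constrained_bpdn_admm(y, A, eps, rho=1.0, n_iter=200):
    """ADMM sketch for min ||x||_1 subject to ||A x - y||_2 <= eps.

    Splits x and A x into auxiliary variables; the x-update solves a small
    linear system, which is fine for the dense toy A assumed here.
    """
    m, n = A.shape
    x = np.zeros(n)
    z1, u1 = np.zeros(n), np.zeros(n)
    z2, u2 = np.zeros(m), np.zeros(m)
    H = np.linalg.inv(np.eye(n) + A.T @ A)
    for _ in range(n_iter):
        x = H @ (z1 - u1 + A.T @ (z2 - u2))                                  # quadratic step
        z1 = np.sign(x + u1) * np.maximum(np.abs(x + u1) - 1.0 / rho, 0.0)   # l1 prox
        z2 = ball_project(A @ x + u2, y, eps)                                # constraint projection
        u1 += x - z1
        u2 += A @ x - z2
    return x
```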
