Results 1 - 20 of 38
1.
Neuroimage; 144(Pt A): 142-152, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-27639353

ABSTRACT

This paper deals with EEG source localization. The aim is to perform spatially coherent focal localization and to recover the temporal EEG waveforms, which can be useful in certain clinical applications. A new hierarchical Bayesian model is proposed with a multivariate Bernoulli-Laplacian structured sparsity prior for the brain activity. This distribution approximates a mixed ℓ2,0 pseudo-norm regularization in a Bayesian framework. A partially collapsed Gibbs sampler is proposed to draw samples asymptotically distributed according to the posterior of the proposed Bayesian model. The generated samples are used to estimate the brain activity and the model hyperparameters jointly in an unsupervised framework. Two kinds of Metropolis-Hastings moves are introduced to accelerate the convergence of the Gibbs sampler: the first is based on multiple dipole shifts within each MCMC chain, whereas the second exploits proposals associated with different MCMC chains. Experiments with focal synthetic data show that the proposed algorithm is more robust and has a higher recovery rate than the weighted ℓ2,1 mixed-norm regularization. On real data, the proposed algorithm finds sources that are spatially consistent with those of state-of-the-art methods, namely the multiple sparse priors approach and the Champagne algorithm. In addition, the method estimates waveforms showing peaks at meaningful time instants, information that can be valuable for characterizing the spread of activity.
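Where the abstract contrasts the ℓ2,0 pseudo-norm prior with the weighted ℓ2,1 mixed norm, a minimal NumPy sketch of the two structured-sparsity measures may help; the matrix layout (one candidate dipole per row, one time sample per column) and the zero tolerance are illustrative assumptions, not the paper's code.

```python
import numpy as np

def l20_pseudo_norm(X, tol=1e-12):
    """Mixed l2,0 pseudo-norm: number of rows (dipoles) with nonzero l2 norm."""
    row_norms = np.linalg.norm(X, axis=1)
    return int(np.count_nonzero(row_norms > tol))

def weighted_l21_norm(X, w=None):
    """Weighted mixed l2,1 norm: weighted sum of row-wise l2 norms."""
    row_norms = np.linalg.norm(X, axis=1)
    if w is None:
        w = np.ones_like(row_norms)
    return float(np.sum(w * row_norms))

# X holds one EEG waveform per candidate dipole (rows: dipoles, cols: time).
X = np.zeros((5, 100))
X[1] = np.sin(np.linspace(0, 2 * np.pi, 100))  # a single active dipole
print(l20_pseudo_norm(X), weighted_l21_norm(X))
```

The ℓ2,0 measure counts active dipoles directly (hence its appeal for focal localization), while the convex ℓ2,1 surrogate merely shrinks row norms.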


Subjects
Brain/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Auditory Perception/physiology, Bayes Theorem, Facial Recognition/physiology, Humans, Statistical Models
2.
NMR Biomed; 29(7): 918-31, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27166741

ABSTRACT

Magnetic resonance spectroscopic imaging (MRSI) is a non-invasive technique able to provide the spatial distribution of relevant biochemical compounds commonly used as biomarkers of disease. Information provided by MRSI offers valuable insight for the diagnosis, treatment, and follow-up of several diseases, such as cancer or neurological disorders. Obtaining accurate metabolite concentrations from in vivo MRSI signals is a crucial requirement for the clinical utility of this technique. Despite numerous publications on the topic, accurate quantification remains challenging due to the low signal-to-noise ratio of the data, the overlap of spectral lines, and the presence of nuisance components. We propose a novel quantification method that alleviates these limitations by exploiting a spatio-spectral regularization scheme. In contrast to previous methods, the regularization terms are not expressed directly on the parameters being sought, but on appropriately transformed domains. A fast proximal optimization algorithm is proposed to quantify all signals in the MRSI grid simultaneously while introducing prior information. Experiments on synthetic MRSI data demonstrate that the error in the estimated metabolite concentrations is reduced by a mean of 41% with the proposed scheme. Results on in vivo brain MRSI data show the benefit of the proposed approach, which correctly fits overlapping peaks and captures metabolites that are missed by single-voxel methods due to their lower concentrations.
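The paper's full method couples spatio-spectral regularization with a proximal algorithm; as a rough illustration of the underlying per-voxel quantification step only, here is a sketch under a simplified real-valued linear model, with a hypothetical metabolite basis and non-negative least squares in place of the paper's regularized solver.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Simplified real-valued model: each voxel spectrum is a non-negative
# combination of known metabolite basis spectra plus noise.
n_points, n_metabolites = 256, 4
basis = np.abs(rng.standard_normal((n_points, n_metabolites)))  # hypothetical basis
true_conc = np.array([1.0, 0.0, 2.5, 0.3])
spectrum = basis @ true_conc + 0.05 * rng.standard_normal(n_points)

conc, residual = nnls(basis, spectrum)  # non-negative least-squares fit per voxel
print(np.round(conc, 2))
```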


Subjects
Algorithms, Brain Neoplasms/metabolism, Brain/metabolism, Image Enhancement/methods, Magnetic Resonance Spectroscopy/methods, Molecular Imaging/methods, Computer-Assisted Signal Processing, Tumor Biomarkers/metabolism, Humans, Computer-Assisted Image Interpretation/methods, Reproducibility of Results, Sensitivity and Specificity, Signal-to-Noise Ratio, Spatio-Temporal Analysis
3.
Pacing Clin Electrophysiol; 37(11): 1510-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25053272

ABSTRACT

BACKGROUND: The aim of the Endocardial T-Wave Alternans Study was to prospectively assess the presence of T-wave alternans (TWA) or beat-to-beat repolarization changes on implantable cardioverter-defibrillator (ICD)-stored electrograms (EGMs) immediately preceding the onset of spontaneous ventricular tachycardia (VT) or fibrillation (VF). METHODS: Thirty-seven VT/VF episodes were compared to 116 baseline reference EGMs from the same 57 patients. A Bayesian model was used to estimate the T-wave waveform in each cardiac beat, and a set of 10 parameters was selected to segment each detected T wave. Beat-by-beat differences in each T-wave parameter were computed as the absolute value of the difference between each beat and the following one. The Fisher criterion was used to determine the most discriminant T-wave parameters; the top-M ranked parameters yielding a normalized cumulative Fisher score > 95% were selected, and the analysis was applied to these selected parameters. Simulated TWA EGMs were used to validate the algorithm. RESULTS: In the simulation study, TWA was detectable even for the smallest simulated alternans of 25 µV. In 13 of the 37 episodes (35%), occurring in nine of 16 patients, significantly larger beat-to-beat variations were detected before arrhythmia onset compared to the respective references (median of one positive episode per patient). Parameters including the T-wave apex amplitude appeared to be the most discriminant. CONCLUSIONS: Detection of beat-by-beat repolarization variations in ICD-stored EGMs is feasible in a significant subset of cases and may be used for predicting the onset of ventricular arrhythmias.
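The beat-to-beat differencing and Fisher-score ranking described in METHODS can be sketched directly; the array shapes and the handling of the 95% threshold below are illustrative assumptions, not the study's code.

```python
import numpy as np

def beat_to_beat_diffs(params):
    """Absolute difference of each T-wave parameter between consecutive beats.

    params: (n_beats, n_params) array of per-beat T-wave parameters.
    """
    return np.abs(np.diff(params, axis=0))

def fisher_scores(x_episode, x_reference):
    """One-dimensional Fisher criterion per parameter."""
    m1, m0 = x_episode.mean(axis=0), x_reference.mean(axis=0)
    v1, v0 = x_episode.var(axis=0), x_reference.var(axis=0)
    return (m1 - m0) ** 2 / (v1 + v0 + 1e-12)

def select_top_m(scores, threshold=0.95):
    """Keep the top-ranked parameters whose normalized cumulative score reaches the threshold."""
    order = np.argsort(scores)[::-1]
    cum = np.cumsum(scores[order]) / scores.sum()
    m = int(np.searchsorted(cum, threshold) + 1)
    return order[:m]
```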


Subjects
Implantable Defibrillators, Cardiac Electrophysiologic Techniques, Ventricular Tachycardia/physiopathology, Ventricular Tachycardia/therapy, Adult, Aged, Aged 80 and over, Cardiac Arrhythmias, Brugada Syndrome, Cardiac Conduction System Disease, Female, Heart Conduction System/abnormalities, Humans, Male, Middle Aged, Prospective Studies
4.
BMC Bioinformatics; 14: 99, 2013 Mar 19.
Article in English | MEDLINE | ID: mdl-23506672

ABSTRACT

BACKGROUND: This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays such as gene expression microarrays. The basis for uBLU is a Bayesian model in which the data samples are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors; it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. RESULTS: First, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Second, we illustrate the application of uBLU on a real, time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. We show that uBLU significantly outperforms the other methods on the simulated and real data sets considered here. CONCLUSIONS: The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method compared with other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
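The mixing model with simplex-constrained factor scores can be illustrated generatively; the dimensions and noise level below are arbitrary, and this sketch shows only the forward model, not the Gibbs inference.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_factors, n_samples = 1000, 3, 20

# Non-negative factors (gene signatures) and factor scores constrained to
# the probability simplex, as in the uBLU mixing model.
factors = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_factors))
scores = rng.dirichlet(alpha=np.ones(n_factors), size=n_samples).T  # columns sum to 1

data = factors @ scores + 0.1 * rng.standard_normal((n_genes, n_samples))
assert np.allclose(scores.sum(axis=0), 1.0)
```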


Subjects
Algorithms, Gene Expression Profiling/methods, Microarray Analysis/methods, Bayes Theorem, Humans, Influenza A Virus H3N2 Subtype, Human Influenza/genetics, Human Influenza/metabolism, Male
5.
Article in English | MEDLINE | ID: mdl-33001800

ABSTRACT

Ultrasound (US) image restoration from radio frequency (RF) signals is generally addressed by deconvolution techniques that mitigate the effect of the system point spread function (PSF). Most existing methods estimate the tissue reflectivity function (TRF) from so-called fundamental US images, based on an image model that assumes linear US wave propagation. However, several human tissues, or tissues with contrast agents, behave nonlinearly when interacting with US waves, leading to harmonic images. This work takes this nonlinearity into account in the context of TRF restoration by considering both fundamental and harmonic RF signals. Starting from two observation models (for the fundamental and harmonic images), TRF estimation is expressed as the minimization of a cost function defined as the sum of two data fidelity terms and a sparsity-based regularization that stabilizes the solution. The strong attenuation of harmonic echoes with depth is integrated into the direct model relating the observed harmonic image to the TRF. The benefit of the proposed method is demonstrated on synthetic and in vivo data, with comparisons against other restoration methods.
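A cost of this form (two quadratic data fidelity terms plus an ℓ1 penalty) can be minimized with a proximal gradient (ISTA-type) scheme; the sketch below assumes generic dense operators and a fixed iteration count, which are simplifications of the paper's setting.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_two_terms(y_f, y_h, H_f, H_h, lam, n_iter=200):
    """ISTA on  0.5||y_f - H_f x||^2 + 0.5||y_h - H_h x||^2 + lam ||x||_1."""
    # Upper bound on the gradient's Lipschitz constant.
    L = np.linalg.norm(H_f, 2) ** 2 + np.linalg.norm(H_h, 2) ** 2
    x = np.zeros(H_f.shape[1])
    for _ in range(n_iter):
        grad = H_f.T @ (H_f @ x - y_f) + H_h.T @ (H_h @ x - y_h)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```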

6.
Article in English | MEDLINE | ID: mdl-32142435

ABSTRACT

This paper introduces a new fusion method for magnetic resonance (MR) and ultrasound (US) images, which aims at combining the advantages of each modality: good contrast and signal-to-noise ratio for the MR image, and good spatial resolution for the US image. The proposed algorithm is based on two inverse problems, performing a super-resolution of the MR image and a denoising of the US image. A polynomial function is introduced to model the relationship between the gray levels of the two modalities. The resulting inverse problem is solved using a proximal alternating linearized minimization (PALM) framework. The accuracy and the interest of the fusion algorithm are shown quantitatively and qualitatively via evaluations on synthetic and experimental phantom data.
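The polynomial link between modality gray levels can be estimated by least squares when co-registered intensities are available; the degree, the simulated relationship, and the noise level in this sketch are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical co-registered gray levels from the two modalities.
mr = rng.uniform(0.0, 1.0, 500)
us = 0.2 + 1.5 * mr - 0.8 * mr**2 + 0.05 * rng.standard_normal(500)

coeffs = np.polyfit(mr, us, deg=3)       # fit the inter-modality polynomial
us_predicted = np.polyval(coeffs, mr)    # map MR intensities to the US domain
print(np.round(coeffs, 3))
```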

7.
Nat Commun; 11(1): 5929, 2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33230217

ABSTRACT

Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension, despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.

8.
IEEE Trans Image Process; 18(9): 2059-70, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19493849

ABSTRACT

This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by additive white Gaussian noise. Our hierarchical Bayesian model is well suited to naturally sparse image applications, as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayesian priors. We propose a prior based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that give only a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and on real data collected from a tobacco virus sample using a prototype MRFM instrument.
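Sampling from the exponential-plus-mass-at-zero prior described above is straightforward; the parameter names below are illustrative, and the sketch covers only the prior, not the full hierarchical sampler.

```python
import numpy as np

def sample_spike_slab_prior(n, w, scale, rng=None):
    """Draw n pixels from  w * Exp(scale) + (1 - w) * delta_0  (positive sparse prior)."""
    rng = np.random.default_rng(rng)
    active = rng.random(n) < w                 # Bernoulli activity indicators
    values = rng.exponential(scale, size=n)    # positive exponential slab
    return np.where(active, values, 0.0)

x = sample_spike_slab_prior(10, w=0.3, scale=1.0, rng=0)
print(x)  # mostly zeros, a few positive entries
```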


Subjects
Bayes Theorem, Computer-Assisted Image Processing/methods, Magnetic Resonance Spectroscopy/methods, Atomic Force Microscopy/methods, Algorithms, Artificial Intelligence, Markov Chains, Monte Carlo Method, Tobacco Mosaic Virus
9.
IEEE Trans Med Imaging; 38(3): 741-752, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30235121

ABSTRACT

This paper introduces a robust 2-D cardiac motion estimation method. The problem is formulated as an energy minimization with an optical-flow-based data fidelity term and two regularization terms imposing spatial smoothness and sparsity of the motion field in an appropriate cardiac motion dictionary. Robustness to outliers, such as imaging artefacts and anatomical motion boundaries, is introduced using robust weighting functions for the data fidelity term as well as for the spatial and sparse regularizations. The motion fields and the weights are computed jointly using an iteratively re-weighted minimization strategy. The proposed robust approach is evaluated on synthetic data and on realistic simulation sequences with available ground truth, comparing its performance with state-of-the-art algorithms. Finally, the proposed method is validated on two sequences of in vivo images. The results show the benefit of the proposed approach for 2-D cardiac ultrasound imaging.
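Iteratively re-weighted minimization alternates between solving a weighted least-squares problem and updating robust weights from the residuals; this sketch uses Cauchy weights on a generic linear model as a stand-in for the paper's specific energy.

```python
import numpy as np

def cauchy_weights(residuals, c=2.385):
    """Robust weights that downweight outlier residuals (Cauchy M-estimator)."""
    return 1.0 / (1.0 + (residuals / c) ** 2)

def irls_step(A, b, x):
    """One iteratively re-weighted least-squares update of  min sum rho(Ax - b)."""
    w = cauchy_weights(A @ x - b)          # weights from current residuals
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)  # weighted normal equations
```

Repeating `irls_step` until the iterates stabilize yields the robust estimate; the joint weight/motion updates in the paper follow the same alternating pattern.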


Subjects
Echocardiography/methods, Heart/diagnostic imaging, Computer-Assisted Image Processing/methods, Algorithms, Artifacts, Computer Simulation, Doppler Echocardiography, Humans, Computer-Assisted Image Interpretation/methods, Reproducibility of Results, Ultrasonography
10.
IEEE Trans Med Imaging; 38(6): 1524-1531, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30507496

ABSTRACT

Available super-resolution techniques for 3-D images are either computationally inefficient prior-knowledge-based iterative techniques or deep learning methods that require a large database of known low-resolution and high-resolution image pairs. A recently introduced tensor-factorization-based approach offers a fast solution without the use of known image pairs or strict prior assumptions. In this paper, this factorization framework is investigated for single-image resolution enhancement with an offline estimate of the system point spread function. The technique is applied to 3-D cone-beam computed tomography for dental image resolution enhancement. To demonstrate its efficiency, our method is compared to a recent state-of-the-art iterative technique using low-rank and total variation regularizations. In contrast to this comparative technique, the proposed reconstruction technique gives a two-order-of-magnitude improvement in running time (2 min compared to 2 h for a dental volume of 282×266×392 voxels), while also offering slightly improved quantitative results (peak signal-to-noise ratio and segmentation quality). Another advantage of the presented technique is its small number of hyperparameters; as demonstrated in this paper, the framework is not sensitive to small changes in its parameters, making it easy to use.


Subjects
Cone-Beam Computed Tomography/methods, Three-Dimensional Imaging/methods, Dental Radiography/methods, Tooth/diagnostic imaging, Algorithms, Factual Databases, Humans
11.
Nat Commun; 10(1): 4984, 2019 Nov 01.
Article in English | MEDLINE | ID: mdl-31676824

ABSTRACT

Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we present a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data were acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.

12.
Article in English | MEDLINE | ID: mdl-30507510

ABSTRACT

Compressive spectral imagers reduce the number of sampled pixels by coding and combining the spectral information. However, sampling compressed information with simultaneously high spatial and high spectral resolution demands expensive high-resolution sensors. This work introduces a model allowing data from high-spatial/low-spectral and low-spatial/high-spectral resolution compressive sensors to be fused. Based on this model, the compressive fusion process is formulated as an inverse problem minimizing an objective function defined as the sum of a quadratic data fidelity term and smoothness and sparsity regularization penalties. The parameters of the different sensors are optimized, and the choice of an appropriate regularization is studied in order to improve the quality of the high-resolution reconstructed images. Simulation results on synthetic and real data, with different compressive sensing (CS) imagers, demonstrate the quality of the proposed fusion method.

13.
IEEE Trans Image Process; 27(1): 64-77, 2018.
Article in English | MEDLINE | ID: mdl-28922120

ABSTRACT

This paper introduces a new method for cardiac motion estimation in 2-D ultrasound images. The motion estimation problem is formulated as an energy minimization whose data fidelity term is built on the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated on one data set with available ground truth, including four sequences of highly realistic simulations. The approach is also validated on both healthy and pathological sequences of in vivo data. We evaluate the method in terms of motion estimation accuracy and strain errors and compare its performance with state-of-the-art algorithms. The results show that the proposed method performs competitively on the considered data. Furthermore, the in vivo strain analysis demonstrates that meaningful clinical interpretation can be obtained from the estimated motion vectors.
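Once a motion dictionary has been learned, the sparse regularization amounts to coding each motion field with a few atoms; the sketch below uses orthogonal matching pursuit on a random, hypothetical dictionary in place of a learned one.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(3)

# Hypothetical motion dictionary: columns are learned motion-field atoms.
n_pixels, n_atoms = 128, 64
dictionary = rng.standard_normal((n_pixels, n_atoms))
dictionary /= np.linalg.norm(dictionary, axis=0)  # unit-norm atoms

motion_field = 2.0 * dictionary[:, 5] - 1.0 * dictionary[:, 20]  # 2-sparse field
codes = orthogonal_mp(dictionary, motion_field, n_nonzero_coefs=2)
print(np.nonzero(codes)[0])  # typically recovers atoms 5 and 20
```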

14.
IEEE Trans Image Process; 16(7): 1796-806, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17605378

ABSTRACT

This paper evaluates the potential interest of using bivariate gamma distributions for image registration and change detection. The first part of the paper studies estimators for the parameters of bivariate gamma distributions based on the maximum likelihood principle and the method of moments. The performance of both methods is compared in terms of estimated mean square errors and theoretical asymptotic variances. Mutual information is a classical similarity measure that can be used for image registration or change detection; the second part of the paper studies some of its properties for bivariate gamma distributions. Image registration and change detection techniques based on bivariate gamma distributions are finally investigated. Simulation results conducted on synthetic and real data are very encouraging, indicating that bivariate gamma distributions are good candidates for developing new image registration algorithms and change detectors.
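For the gamma marginals, the method-of-moments estimators follow directly from E[X] = kθ and Var[X] = kθ²; the sketch below covers only this univariate step, not the full bivariate estimation studied in the paper.

```python
import numpy as np

def gamma_moment_estimates(x):
    """Method-of-moments estimates of a gamma marginal: shape k and scale theta.

    Solves  E[X] = k * theta  and  Var[X] = k * theta**2  for (k, theta).
    """
    m, v = x.mean(), x.var()
    theta = v / m
    k = m / theta
    return k, theta

rng = np.random.default_rng(4)
x = rng.gamma(shape=3.0, scale=2.0, size=100_000)
print(gamma_moment_estimates(x))  # close to (3.0, 2.0)
```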


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Motion (Physics), Automated Pattern Recognition/methods, Subtraction Technique, Computer Simulation, Statistical Data Interpretation, Image Enhancement/methods, Statistical Models, Statistical Distributions
15.
IEEE Trans Image Process; 26(1): 426-438, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27810822

ABSTRACT

Recent work has shown that existing powerful Bayesian hyperspectral unmixing algorithms can be significantly improved by incorporating the inherent local spatial correlations between pixel class labels via the use of Markov random fields. Here we propose a new Bayesian approach to joint hyperspectral unmixing and image classification in which the previous assumption of stochastic abundance vectors is relaxed to a formulation whereby a common abundance vector is assumed for the pixels in each class. This allows us to avoid stochastic reparameterizations; instead, we propose a symmetric Dirichlet distribution model with adjustable parameters for the common abundance vector of each class. Inference over the proposed model is achieved via a hybrid Gibbs sampler; in particular, simulated annealing is introduced for the label estimation in order to avoid the local-trap problem. Experiments on a synthetic image and a popular, publicly available real data set indicate that the proposed model is faster than, and outperforms, the existing approach both quantitatively and qualitatively. Moreover, for appropriate choices of the Dirichlet parameter, the proposed approach is shown to induce sparsity in the inferred abundance vectors. This offers increased robustness in cases where the preprocessing endmember extraction algorithms overestimate the number of active endmembers present in a given scene.
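The sparsity effect of the symmetric Dirichlet parameter is easy to see empirically: concentration values below one push most abundance mass onto a few endmembers, while larger values spread it evenly. A small sketch with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_endmembers = 8

# Symmetric Dirichlet draws: alpha < 1 yields near-sparse abundance vectors,
# alpha > 1 yields nearly uniform ones.
for alpha in (0.1, 1.0, 10.0):
    abundances = rng.dirichlet(alpha * np.ones(n_endmembers))
    print(alpha, np.round(abundances, 2))
```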

16.
Biomed Opt Express; 8(12): 5450-5467, 2017 Dec 01.
Article in English | MEDLINE | ID: mdl-29296480

ABSTRACT

Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and few automatic processing techniques exist. Those available are mostly based on machine learning approaches and rely on numerous classical image features, leading to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very-low-dimensional parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, achieving a sensitivity of 81.4% and a specificity of 83.3%. Our results show that lentigo is identifiable at depths between 50 µm and 60 µm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction.
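The generalized-Gaussian-plus-SVM pipeline can be sketched with standard tooling; the synthetic coefficients, shape values, and class separation below are purely illustrative stand-ins for the clinical subband data.

```python
import numpy as np
from scipy.stats import gennorm
from sklearn.svm import SVC

def ggd_features(coefficients):
    """Fit a generalized Gaussian to subband coefficients; keep (shape, scale)."""
    beta, _, scale = gennorm.fit(coefficients, floc=0.0)
    return [beta, scale]

# Hypothetical subband coefficients for two tissue classes at one skin depth;
# the lentigo class is simulated with a larger scale parameter.
healthy = [gennorm.rvs(0.8, scale=1.0, size=2000, random_state=i) for i in range(20)]
lentigo = [gennorm.rvs(0.8, scale=2.0, size=2000, random_state=i + 50) for i in range(20)]

X = np.array([ggd_features(c) for c in healthy + lentigo])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```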

17.
IEEE Trans Image Process; 25(9): 3979-90, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27305679

ABSTRACT

Hyperspectral unmixing aims to identify the reference spectral signatures composing a hyperspectral image and their relative abundance fractions in each pixel. In practice, the identified signatures may vary spectrally from one image to another due to varying acquisition conditions, inducing possibly significant estimation errors. Against this background, the hyperspectral unmixing of several images acquired over the same area is of considerable interest: such an analysis enables the endmembers of the scene to be tracked and the corresponding endmember variability to be characterized. Sequential endmember estimation from a set of hyperspectral images is expected to provide improved performance compared with methods analyzing the images independently. However, the significant size of hyperspectral data precludes the use of batch procedures to jointly estimate the mixture parameters of a sequence of hyperspectral images. Provided that each elementary component is present in at least one image of the sequence, we propose to perform online hyperspectral unmixing accounting for temporal endmember variability. The online hyperspectral unmixing is formulated as a two-stage stochastic program, which can be solved using stochastic approximation. The performance of the proposed method is evaluated on synthetic and real data, and a comparison with independent unmixing algorithms illustrates the benefit of the proposed strategy.

18.
IEEE Trans Image Process; 25(8): 3736-50, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27187959

ABSTRACT

This paper proposes a joint segmentation and deconvolution Bayesian method for medical ultrasound (US) images. Contrary to piecewise homogeneous images, US images exhibit heavy characteristic speckle patterns correlated with the tissue structures. The generalized Gaussian distribution (GGD) has been shown to be one of the most relevant distributions for characterizing speckle in US images. We therefore propose a GGD-Potts model defined by a label map coupling US image segmentation and deconvolution. The Bayesian estimators of the unknown model parameters, including the US image, the label map, and all the hyperparameters, are difficult to express in closed form. Thus, we investigate a Gibbs sampler to generate samples distributed according to the posterior of interest; these samples are then used to compute the Bayesian estimators of the unknown parameters. The performance of the proposed Bayesian model is compared with existing approaches via several experiments conducted on realistic synthetic data and in vivo US images.

19.
IEEE Trans Image Process; 25(3): 1136-51, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26685243

ABSTRACT

Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model; nevertheless, it has been recognized that the mixing phenomena can also be nonlinear. The corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then to employ the simplest possible unmixing technique for each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on comparing the reconstruction errors obtained with a Gaussian process regression model and with a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images.
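One way to read the detection statistic is as the gap between the residuals of a linear fit and a Gaussian process fit of the pixel spectrum on the endmembers; the feature layout below (one sample per spectral band) is a simplifying assumption of this sketch, not necessarily the paper's exact regression setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LinearRegression

def nonlinearity_statistic(endmembers, pixel):
    """Difference of reconstruction errors: linear fit minus Gaussian process fit.

    endmembers: (n_endmembers, n_bands); pixel: (n_bands,).
    A large positive value suggests the pixel is nonlinearly mixed.
    """
    X = endmembers.T  # one sample per band, one feature per endmember
    lin_err = np.sum((LinearRegression().fit(X, pixel).predict(X) - pixel) ** 2)
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-4).fit(X, pixel)
    gp_err = np.sum((gp.predict(X) - pixel) ** 2)
    return lin_err - gp_err
```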

20.
IEEE Trans Image Process; 25(8): 3683-97, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27187960

ABSTRACT

This paper addresses the problem of single-image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. Existing algorithms for single-image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated as an ℓ2-regularized quadratic model, i.e., an ℓ2-ℓ2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
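Without the decimation operator, the ℓ2-ℓ2 problem already has the classical closed-form frequency-domain solution, which the sketch below implements; the paper's contribution extends this analytical solution to include decimation, which is not reproduced here.

```python
import numpy as np

def l2_l2_deconvolution(y, psf, lam):
    """Closed-form  argmin_x ||y - h * x||^2 + lam ||x||^2  via the FFT.

    No-decimation special case: blurring is diagonalized by the 2-D Fourier
    transform, giving a per-frequency (Wiener-type) solution.
    """
    H = np.fft.fft2(psf, s=y.shape)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```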
