Results 1 - 20 of 45
1.
Proc Natl Acad Sci U S A ; 120(49): e2314542120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38015849

ABSTRACT

High-resolution imaging with compositional and chemical sensitivity is crucial for a wide range of scientific and engineering disciplines. Although synchrotron X-ray imaging through spectromicroscopy has been tremendously successful and broadly applied, it encounters challenges in achieving enhanced detection sensitivity, satisfactory spatial resolution, and high experimental throughput simultaneously. In this work, based on structured illumination, we develop a single-pixel X-ray imaging approach coupled with a generative image reconstruction model for mapping the compositional heterogeneity with nanoscale resolvability. This method integrates a full-field transmission X-ray microscope with an X-ray fluorescence detector and eliminates the need for nanoscale X-ray focusing and raster scanning. We experimentally demonstrate the effectiveness of our approach by imaging a battery sample composed of mixed cathode materials and successfully retrieving the compositional variations of the imaged cathode particles. Bridging the gap between structural and chemical characterizations using X-rays, this technique opens up vast opportunities in biological, environmental, and materials science, especially for radiation-sensitive samples.

2.
Opt Express ; 32(4): 5567-5581, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439279

ABSTRACT

We propose a polarization sensitive terahertz time-domain spectrometer that can record orthogonally polarized terahertz fields simultaneously, using fibre-coupled photoconductive antennas and a scheme that modulates the emitter's polarization. The s and p channels of the multi-pixel terahertz emitter were modulated at different frequencies, thereby allowing orthogonal waveforms to be demultiplexed from the recorded signal in post-processing. The performance of the multi-pixel emitter used in this multiplexing scheme was comparable to that of a commercial single-polarization H-dipole antenna. The approach allowed two orthogonally polarized terahertz pulses to be recorded with a good signal-to-noise ratio (>1000:1) within half a second. We verified the capability of the spectrometer by characterizing a birefringent crystal and by imaging a polarization-sensitive metamaterial. This work has significant potential to improve the speed of terahertz polarization sensitive applications, such as ellipsometry and imaging.
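
The frequency-multiplexing idea above, each polarization channel riding on its own carrier frequency and separated in post-processing, can be sketched with a lock-in style demodulation. This is a minimal illustration with made-up carrier frequencies and constant channel amplitudes, not the authors' processing pipeline:

```python
import numpy as np

def demultiplex(signal, t, f_carrier, taps=501):
    """Lock-in style demodulation: mix with the carrier, then low-pass."""
    mixed = signal * np.cos(2 * np.pi * f_carrier * t)
    # moving-average low-pass; the factor 2 compensates the mixing gain of 1/2
    kernel = np.ones(taps) / taps
    return 2 * np.convolve(mixed, kernel, mode="same")

# two "waveform amplitudes" multiplexed at different (hypothetical) frequencies
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
s_chan, p_chan = 0.8, 0.3                      # stand-ins for the s/p channels
f1, f2 = 300.0, 450.0                          # illustrative carrier frequencies
recorded = (s_chan * np.cos(2 * np.pi * f1 * t)
            + p_chan * np.cos(2 * np.pi * f2 * t))

s_rec = demultiplex(recorded, t, f1)           # recovers ~0.8 away from edges
p_rec = demultiplex(recorded, t, f2)           # recovers ~0.3 away from edges
```

The low-pass step rejects both the double-frequency mixing products and the cross-channel beat terms, which is what allows the two channels to share one recorded signal.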

3.
J Opt Soc Am A Opt Image Sci Vis ; 40(1): 85-95, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36607078

ABSTRACT

Single-frame off-axis holographic reconstruction is promising for quantitative phase imaging. However, reconstruction accuracy and contrast are degraded by noise, frequency spectrum overlap of the interferogram, severe phase distortion, etc. In this work, we propose an iterative single-frame complex wave retrieval based on an explicit model of object and reference waves. We also develop a phase restoration algorithm that does not resort to phase unwrapping. Both simulation and real experiments demonstrate higher accuracy and robustness compared to state-of-the-art methods, for both complex wave estimation and phase reconstruction. Importantly, the allowed bandwidth for the object wave is significantly improved in realistic experimental conditions (similar amplitudes for object and reference waves), which makes it attractive for large field-of-view and high-resolution imaging applications.

4.
J Acoust Soc Am ; 152(6): 3523, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36586826

ABSTRACT

In this paper, we present a gridless algorithm to recover an attenuated acoustic field without knowing the range information of the source. This algorithm provides the joint estimation of horizontal wavenumbers, mode amplitudes, and acoustic attenuation. The key idea is to approximate the acoustic field in range as a finite sum of damped sinusoids, for which the sinusoidal parameters convey the ocean information of interest (e.g., wavenumber, attenuation, etc.). Using an efficient finite rate of innovation algorithm, an accurate recovery of the attenuated acoustic field can be achieved, even if the measurement noise is correlated and the range of the source is unknown. Moreover, the proposed method is able to perform joint recovery of multiple sensor data, which leads to a more robust field reconstruction. The data used here are acquired from a vertical line array at different depths measuring a moving source at several ranges. We demonstrate the performance of the proposed algorithm both in synthetic simulations and on real data from the 1996 shallow water evaluation cell experiment (SWellEx-96).
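
The core modeling step, approximating the field in range as a sum of damped sinusoids whose complex poles encode the wavenumber and attenuation, can be illustrated with a noise-free single-mode toy example. A Prony-style ratio estimate stands in here for the more robust finite-rate-of-innovation machinery of the paper:

```python
import numpy as np

# synthetic "mode" sampled in range: x[n] = a * z**n, with pole
# z = exp(-alpha + 1j*k) encoding attenuation (alpha) and wavenumber (k)
alpha_true, k_true = 0.02, 1.3           # illustrative values
z_true = np.exp(-alpha_true + 1j * k_true)
n = np.arange(200)
x = 0.7 * z_true ** n

# Prony-style pole estimate from consecutive-sample ratios (noise-free case)
z_est = np.mean(x[1:] / x[:-1])
alpha_est = -np.log(np.abs(z_est))       # recovered attenuation
k_est = np.angle(z_est)                  # recovered wavenumber
```

With noisy or multi-mode data this naive ratio breaks down, which is exactly why the paper resorts to a finite-rate-of-innovation estimator; the pole parameterization itself, however, is the same.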

5.
Magn Reson Med ; 82(5): 1767-1781, 2019 11.
Article in English | MEDLINE | ID: mdl-31237001

ABSTRACT

PURPOSE: CEST is commonly used to probe the effects of chemical exchange. Although R1ρ asymmetry quantification has also been described as a promising option for detecting the effects of chemical exchange, the existing acquisition approaches are highly susceptible to B1 RF and B0 field inhomogeneities. To address this problem, we report a new R1ρ asymmetry imaging approach, AC-iTIP, which is based on the previously reported techniques of irradiation with toggling inversion preparation (iTIP) and adiabatic continuous wave constant amplitude spin-lock RF pulses (ACCSL). We also derived the optimal spin-lock RF pulse B1 amplitude that yielded the greatest R1ρ asymmetry. METHODS: Bloch-McConnell simulations were used to verify the analytical formula derived for the optimal spin-lock RF pulse B1 amplitude. The performance of the AC-iTIP approach was compared to that of the iTIP approach based on hard RF pulses and the R1ρ-spectrum acquired using adiabatic RF pulses with the conventional fitting method. Comparisons were performed using Bloch-McConnell simulations, phantom, and in vivo experiments at 3.0T. RESULTS: The analytical prediction of the optimal B1 was validated. Compared to the other two approaches, the AC-iTIP approach was more robust under the influences of B1 RF and B0 field inhomogeneities. A linear relationship was observed between the measured R1ρ asymmetry and the metabolite concentration. CONCLUSION: The AC-iTIP approach could probe the chemical exchange effect more robustly than the existing R1ρ asymmetry acquisition approaches. Therefore, AC-iTIP is a promising technique for metabolite imaging based on the chemical exchange effect.


Subjects
Knee/diagnostic imaging , Magnetic Resonance Imaging/methods , Computer Simulation , Healthy Volunteers , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Phantoms, Imaging , Signal-To-Noise Ratio
6.
Opt Express ; 26(20): 26120-26133, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-30469703

ABSTRACT

A proper estimation of the realistic point-spread function (PSF) in optical microscopy can significantly improve deconvolution performance and assist the microscope calibration process. In this work, taking 3D wide-field fluorescence microscopy as an example, we propose an approach for estimating the spherically aberrated PSF of a microscope directly from the observed samples. The PSF, expressed as a linear combination of four basis functions, is obtained directly from the acquired image by minimizing a novel criterion, which is derived from the noise statistics of the microscope. We demonstrate the effectiveness of the PSF approximation model and of our estimation method using both simulations and real experiments carried out on quantum dots. The principle of our PSF estimation approach is sufficiently flexible to be generalized to non-spherical aberrations and other microscope modalities.

7.
J Opt Soc Am A Opt Image Sci Vis ; 34(6): 1029-1034, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-29036087

ABSTRACT

The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way of carrying out the calculation. The approximation error is quantified numerically in terms of the model parameters. Experiments demonstrate that the proposed approach achieves the same accuracy in a significantly smaller computational time than current state-of-the-art techniques. This approach can also be extended to other microscopy PSF models.
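
The idea of expanding a radial profile in rescaled Bessel functions can be sketched as a plain least-squares fit. The Gaussian target and the scaling factors below are illustrative stand-ins, not the Gibson-Lanni kernel or the paper's actual basis selection:

```python
import numpy as np
from scipy.special import j0

# radial samples and a smooth radial profile standing in for a PSF section
r = np.linspace(0, 4, 400)
target = np.exp(-r**2 / 2)

# basis of rescaled Bessel functions J0(sigma_m * r); sigmas are illustrative
sigmas = np.linspace(0.1, 3.0, 12)
B = np.stack([j0(s * r) for s in sigmas], axis=1)      # shape (400, 12)

# linear least-squares for the combination coefficients
coeffs, *_ = np.linalg.lstsq(B, target, rcond=None)
approx = B @ coeffs
max_err = np.max(np.abs(approx - target))              # small residual
```

Once the coefficients are known, evaluating the model anywhere reduces to a handful of Bessel-function calls, which is what makes the representation integral-free.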

8.
Nat Commun ; 15(1): 5411, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926336

ABSTRACT

Most rod-shaped bacteria elongate by inserting new cell wall material into the inner surface of the cell sidewall. This is performed by class A penicillin binding proteins (PBPs) and a highly conserved protein complex, the elongasome, which moves processively around the cell circumference and inserts long glycan strands that act as barrel-hoop-like reinforcing structures, thereby giving rise to a rod-shaped cell. However, it remains unclear how elongasome synthesis dynamics and termination events are regulated to determine the length of these critical cell-reinforcing structures. To address this, we developed a method to track individual elongasome complexes around the entire circumference of Bacillus subtilis cells for minutes-long periods using single-molecule fluorescence microscopy. We found that the B. subtilis elongasome is highly processive and that processive synthesis events are frequently terminated by rapid reversal or extended pauses. We found that cellular levels of RodA regulate elongasome processivity, reversal and pausing. Our single-molecule data, together with stochastic simulations, show that elongasome dynamics and processivity are regulated by molecular motor tug-of-war competition between several, likely two, oppositely oriented peptidoglycan synthesis complexes associated with the MreB filament. Altogether these results demonstrate that molecular motor tug-of-war is a key regulator of elongasome dynamics in B. subtilis, which likely also regulates the cell shape via modulation of elongasome processivity.


Subjects
Bacillus subtilis , Bacterial Proteins , Cell Wall , Penicillin-Binding Proteins , Bacillus subtilis/metabolism , Bacillus subtilis/genetics , Cell Wall/metabolism , Bacterial Proteins/metabolism , Bacterial Proteins/genetics , Penicillin-Binding Proteins/metabolism , Penicillin-Binding Proteins/genetics , Peptidoglycan/metabolism , Peptidoglycan/biosynthesis , Microscopy, Fluorescence , Single Molecule Imaging , Molecular Motor Proteins/metabolism , Molecular Motor Proteins/genetics
9.
J Opt Soc Am A Opt Image Sci Vis ; 30(10): 2012-20, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-24322857

ABSTRACT

Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is nonideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account nonideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using nonideal sampling devices.

10.
Comput Biol Med ; 151(Pt A): 106295, 2022 12.
Article in English | MEDLINE | ID: mdl-36423533

ABSTRACT

PURPOSE: Two-dimensional (2D) fast spin echo (FSE) techniques play a central role in the clinical magnetic resonance imaging (MRI) of knee joints. Moreover, three-dimensional (3D) FSE provides high-isotropic-resolution magnetic resonance (MR) images of knee joints, but it has a reduced signal-to-noise ratio compared to 2D FSE. Deep-learning denoising methods are a promising approach for denoising MR images, but they are often trained using synthetic noise due to challenges in obtaining true noise distributions for MR images. In this study, inherent true noise information from two-number-of-excitations (2-NEX) acquisition was used to develop a deep-learning model based on residual learning with a convolutional neural network (CNN), and this model was used to suppress the noise in 3D FSE MR images of knee joints. METHODS: A deep learning-based denoising method was developed. The proposed CNN used two-step residual learning over parallel transporting and residual blocks and was designed to comprehensively learn real noise features from 2-NEX training data. RESULTS: The results of an ablation study validated the network design. The new method achieved improved denoising performance of 3D FSE knee MR images compared with current state-of-the-art methods, based on the peak signal-to-noise ratio and structural similarity index measure. The improved image quality after denoising using the new method was verified by radiological evaluation. CONCLUSION: A deep CNN using the inherent spatially-varying noise information in 2-NEX acquisitions was developed. This method showed promise for clinical MRI assessments of the knee, and has potential applications for the assessment of other anatomical structures.


Subjects
Knee Joint , Magnetic Resonance Imaging , Humans , Knee Joint/diagnostic imaging , Neural Networks, Computer , Disease Progression , Magnetic Resonance Spectroscopy
11.
IEEE Trans Med Imaging ; 40(12): 3686-3697, 2021 12.
Article in English | MEDLINE | ID: mdl-34242163

ABSTRACT

Physiological motion, such as cardiac and respiratory motion, during Magnetic Resonance (MR) image acquisition can cause image artifacts. Motion correction techniques have been proposed to compensate for these types of motion during thoracic scans, relying on accurate motion estimation from undersampled motion-resolved reconstruction. A particular interest and challenge lie in the derivation of reliable non-rigid motion fields from the undersampled motion-resolved data. Motion estimation is usually formulated in image space via diffusion, parametric-spline, or optical flow methods. However, image-based registration can be impaired by remaining aliasing artifacts due to the undersampled motion-resolved reconstruction. In this work, we describe a formalism to perform non-rigid registration directly in the sampled Fourier space, i.e. k-space. We propose a deep-learning based approach to perform fast and accurate non-rigid registration from the undersampled k-space data. The basic working principle originates from the Local All-Pass (LAP) technique, a recently introduced optical flow-based registration. The proposed LAPNet is compared against traditional and deep learning image-based registrations and tested on fully-sampled and highly-accelerated (with two undersampling strategies) 3D respiratory motion-resolved MR images in a cohort of 40 patients with suspected liver or lung metastases and 25 healthy subjects. The proposed LAPNet provided consistent and superior performance to image-based approaches throughout different sampling trajectories and acceleration factors.


Subjects
Artifacts , Magnetic Resonance Imaging , Algorithms , Heart/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Motion , Movement
12.
Article in English | MEDLINE | ID: mdl-32365030

ABSTRACT

Computing the convolution between a 2D signal and a corresponding filter with variable orientations is a basic problem that arises in various tasks ranging from low level image processing (e.g. ridge/edge detection) to high level computer vision (e.g. pattern recognition). Despite decades of research, an efficient method for solving this problem is still lacking. In this paper, we investigate this problem from the perspective of approximation by considering the following question: what is the optimal basis for approximating all rotated versions of a given bivariate function? Surprisingly, solely minimising the L2-approximation-error leads to a rotation-covariant linear expansion, which we name the Fourier-Argand representation. This representation presents two major advantages: 1) rotation-covariance of the basis, which implies a "strong steerability" - rotating by an angle α corresponds to multiplying each basis function by a complex scalar e^{-ikα}; 2) optimality of the Fourier-Argand basis, which ensures that a small number of basis functions suffices to accurately approximate complicated patterns and highly direction-selective filters. We show the relation between the Fourier-Argand representation and the Radon transform, leading to an efficient implementation of the decomposition for digital filters. We also show how to retrieve accurate orientations of local structures/patterns using a fast frequency estimation algorithm.
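
The claimed "strong steerability" is easy to verify numerically for a purely angular profile: expanding it in circular harmonics e^{ikθ} and multiplying the k-th coefficient by e^{-ikα} reproduces the rotated (circularly shifted) samples exactly. This toy check uses a 1D angular function rather than a full bivariate filter:

```python
import numpy as np

# a 2*pi-periodic angular profile and its circular-harmonic coefficients
N = 64
theta = 2 * np.pi * np.arange(N) / N
f = np.cos(theta) + 0.5 * np.sin(3 * theta) + 0.2 * np.cos(7 * theta)
c = np.fft.fft(f) / N                     # coefficients c_k of e^{i k theta}

# steerability: rotating by alpha multiplies c_k by e^{-i k alpha}
alpha = 2 * np.pi * 5 / N                 # a rotation by exactly 5 grid steps
k = np.fft.fftfreq(N, d=1.0 / N)          # harmonic indices k
c_rot = c * np.exp(-1j * k * alpha)
f_rot = np.real(np.fft.ifft(c_rot) * N)

# reference: directly sampling f(theta - alpha) is a circular shift by 5
f_ref = np.roll(f, 5)                     # matches f_rot to machine precision
```

In the Fourier-Argand setting the same phase-multiplication trick steers 2D filters, so rotating a filter costs a handful of complex multiplies instead of a resampling.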

13.
Article in English | MEDLINE | ID: mdl-32275596

ABSTRACT

Image registration is a required step in many practical applications that involve the acquisition of multiple related images. In this paper, we propose a methodology to deal with both the geometric and intensity transformations in the image registration problem. The main idea is to modify an accurate and fast elastic registration algorithm (Local All-Pass, LAP) so that it returns a parametric displacement field, and to estimate the intensity changes by fitting another parametric expression. Although we demonstrate the methodology using a low-order parametric model, our approach is highly flexible and easily allows substantially richer parametrisations, while requiring only limited extra computation cost. In addition, we propose two novel quantitative criteria to evaluate the accuracy of the alignment of two images ("salience correlation") and the number of degrees of freedom ("parsimony") of a displacement field, respectively. Experimental results on both synthetic and real images demonstrate the high accuracy and computational efficiency of our methodology. Furthermore, we demonstrate that the resulting displacement fields are more parsimonious than the ones obtained in other state-of-the-art image registration approaches.

14.
Nat Commun ; 11(1): 2535, 2020 05 21.
Article in English | MEDLINE | ID: mdl-32439984

ABSTRACT

Terahertz (THz) radiation is poised to have an essential role in many imaging applications, from industrial inspections to medical diagnosis. However, commercialization is prevented by impractical and expensive THz instrumentation. Single-pixel cameras have emerged as alternatives to multi-pixel cameras due to reduced costs and superior durability. Here, by optimizing the modulation geometry and post-processing algorithms, we demonstrate the acquisition of a THz-video (32 × 32 pixels at 6 frames-per-second), shown in real-time, using a single-pixel fiber-coupled photoconductive THz detector. A laser diode with a digital micromirror device shining visible light onto silicon acts as the spatial THz modulator. We mathematically account for the temporal response of the system, reduce noise with a lock-in free carrier-wave modulation and realize quick, noise-robust image undersampling. Since our modifications do not impose intricate manufacturing, require long post-processing, nor sacrifice the time-resolving capabilities of THz-spectrometers, their greatest asset, this work has the potential to serve as a foundation for all future single-pixel THz imaging systems.
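
The single-pixel measurement model behind such cameras can be sketched with binary Hadamard illumination patterns, one standard choice for DMD-based modulation. The 8x8 scene and pattern set below are illustrative, not the authors' 32x32 setup or their undersampling scheme:

```python
import numpy as np
from scipy.linalg import hadamard

# ground-truth scene, flattened 8x8 grid = 64 "pixels"
rng = np.random.default_rng(0)
scene = rng.random(64)

# 64 binary (0/1) illumination patterns derived from a Hadamard matrix,
# as could be displayed by a digital micromirror device
H = hadamard(64)                      # entries in {-1, +1}, H @ H.T = 64*I
patterns = (H + 1) // 2               # entries in {0, 1}

# each single-pixel measurement is the total intensity through one pattern
y = patterns @ scene

# the all-ones first pattern measures the scene's total intensity,
# which converts the 0/1 measurements back to the +-1 Hadamard basis
y_pm = 2 * y - y[0]                   # equals H @ scene
recovered = H.T @ y_pm / 64           # exact reconstruction
```

Undersampling, as in the paper, amounts to keeping only a subset of the rows of `patterns` and replacing the direct inversion with a regularized solver.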

15.
J Acoust Soc Am ; 125(4): 2413-9, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19354415

ABSTRACT

Modeling the acoustical process of soft biological tissue imaging and understanding the consequences of the approximations required by such modeling are key steps for accurately simulating ultrasonic scanning as well as estimating the scattering coefficient of the imaged matter. In this document, a linear solution to the inhomogeneous ultrasonic wave equation is proposed. The classical assumptions required for linearization are applied; however, no approximation is made in the mathematical development regarding density and speed of sound. This leads to an expression of the scattering term that establishes a correspondence between the signal measured by an ultrasound transducer and an intrinsic mechanical property of the imaged tissues. This expression shows that considering the scattering as a function of small variations in the density and speed of sound around their mean values along with classical assumptions in this domain is equivalent to associating the acoustical acquisition with a measure of the relative longitudinal bulk modulus. Comparison of the model proposed to Jensen's earlier model shows that it is also appropriate to perform accurate simulations of the acoustical imaging process.


Subjects
Acoustics , Models, Theoretical , Ultrasonography , Algorithms , Linear Models
16.
IEEE Trans Image Process ; 17(9): 1540-54, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18701393

ABSTRACT

We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
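
The Monte-Carlo trick is compact: probe the denoiser with a small random perturbation to estimate its divergence, then plug that into SURE. A minimal sketch with a soft-threshold denoiser playing the role of the black box; the signal, noise level, and threshold are arbitrary choices for illustration:

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def monte_carlo_sure(y, denoiser, sigma, eps=1e-4, seed=1):
    """Black-box SURE: estimate the divergence from one random probe."""
    n = y.size
    b = np.random.default_rng(seed).standard_normal(n)
    div = b @ (denoiser(y + eps * b) - denoiser(y)) / eps
    return np.mean((denoiser(y) - y) ** 2) - sigma**2 + 2 * sigma**2 * div / n

# piecewise-constant test signal corrupted by white Gaussian noise
rng = np.random.default_rng(0)
x = np.repeat([0.0, 4.0, -2.0, 1.0], 2500)      # clean signal, N = 10000
sigma = 1.0
y = x + sigma * rng.standard_normal(x.size)

t = 1.0                                          # one candidate parameter value
sure = monte_carlo_sure(y, lambda v: soft_threshold(v, t), sigma)
true_mse = np.mean((soft_threshold(y, t) - x) ** 2)   # needs x; SURE does not
```

Sweeping `t` and picking the minimizer of `sure` selects the parameter without ever touching the clean signal, which is the point of the method.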


Subjects
Algorithms , Artifacts , Image Enhancement , Image Interpretation, Computer-Assisted/methods , Computer Simulation , Data Interpretation, Statistical , Image Enhancement/methods , Models, Statistical , Monte Carlo Method , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Trans Image Process ; 27(2): 1010-1025, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29757743

ABSTRACT

This paper deals with the estimation of a deformation that describes the geometric transformation between two images. To solve this problem, we propose a novel framework that relies upon the brightness consistency hypothesis-a pixel's intensity is maintained throughout the transformation. Instead of assuming small distortion and linearizing the problem (e.g. via Taylor Series expansion), we propose to interpret the brightness hypothesis as an all-pass filtering relation between the two images. The key advantages of this new interpretation are that no restrictions are placed on the amplitude of the deformation or on the spatial variations of the images. Moreover, by converting the all-pass filtering to a linear forward-backward filtering relation, our solution to the estimation problem equates to solving a linear system of equations, which leads to a highly efficient implementation. Using this framework, we develop a fast algorithm that relates one image to another, on a local level, using an all-pass filter and then extracts the deformation from the filter-hence the name "Local All-Pass" (LAP) algorithm. The effectiveness of this algorithm is demonstrated on a variety of synthetic and real deformations that are found in applications, such as image registration and motion estimation. In particular, when compared with a selection of image registration algorithms, the LAP obtains very accurate results for significantly reduced computation time and is very robust to noise corruption.

18.
IEEE Trans Image Process ; 27(1): 92-105, 2018.
Article in English | MEDLINE | ID: mdl-28922119

ABSTRACT

We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parameterize the deconvolution process as a linear combination of elementary functions, termed as linear expansion of thresholds. This parameterization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate. Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms the state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images and demonstrate the potential applicability of the proposed method for improving the quality of these images.

19.
IEEE Trans Image Process ; 16(11): 2778-86, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17990754

ABSTRACT

We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error--Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain--typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms--this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a "tight" frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes--linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.


Subjects
Algorithms , Artifacts , Data Interpretation, Statistical , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Computer Simulation , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Image Process ; 16(3): 593-606, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17357721

ABSTRACT

This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased, MSE estimate--Stein's unbiased risk estimate--that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance--both regarding quality and CPU requirement--by comparing it with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree.
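
The linear-expansion idea reduces parameter optimization to a linear system: writing the denoiser as a weighted sum of elementary processes makes SURE quadratic in the weights, so the optimal weights solve M a = c, where M collects inner products of the elementary outputs and c their correlations with the data minus a divergence correction. A sketch with two soft-thresholds as the elementary processes, applied directly to synthetic coefficients rather than an actual orthonormal wavelet transform:

```python
import numpy as np

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# treat y as noisy sparse coefficients, y = x + N(0, sigma^2)
rng = np.random.default_rng(0)
N = 20000
x = np.where(rng.random(N) < 0.1, rng.standard_normal(N) * 8, 0.0)
sigma = 1.0
y = x + sigma * rng.standard_normal(N)

# LET: denoiser = a1*F1(y) + a2*F2(y), two soft-thresholds as elementary terms
F = np.stack([soft(y, 1.0), soft(y, 3.0)])                # shape (2, N)
# divergence of soft-thresholding = number of surviving coefficients
div = np.array([(np.abs(y) > 1.0).sum(), (np.abs(y) > 3.0).sum()])

# minimizing SURE (quadratic in the weights) = one 2x2 linear system
M = F @ F.T
c = F @ y - sigma**2 * div
a = np.linalg.solve(M, c)                                  # optimal weights
denoised = a @ F

mse_let = np.mean((denoised - x) ** 2)
mse_single = min(np.mean((soft(y, t) - x) ** 2) for t in (1.0, 3.0))
```

Because the weights come from a tiny linear solve rather than an iterative search, the combined denoiser costs little more than its individual terms, which is the "fast and efficient" claim of the abstract in miniature.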


Subjects
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity