Results 1 - 12 of 12
1.
IEEE Trans Med Imaging ; 41(7): 1625-1638, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35041598

ABSTRACT

Deep learning has become a very promising avenue for magnetic resonance imaging (MRI) reconstruction. In this work, we explore the potential of unrolled networks for non-Cartesian acquisition settings. We design the NC-PDNet (Non-Cartesian Primal-Dual Network), the first density-compensated (DCp) unrolled neural network, and validate the need for its key components via an ablation study. Moreover, we conduct generalizability experiments to test this network in out-of-distribution settings, for example training on knee data and validating on brain data. The results show that NC-PDNet outperforms baseline models (U-Net, Deep Image Prior) both visually and quantitatively in all settings. In particular, in the 2D multi-coil acquisition scenario, NC-PDNet provides up to a 1.2 dB improvement in peak signal-to-noise ratio (PSNR) over baseline networks, while also allowing a gain of at least 1 dB in PSNR in generalization settings. We provide an open-source implementation of NC-PDNet, including the non-uniform Fourier transform in TensorFlow, tested on 2D multi-coil and 3D single-coil k-space data.
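
The abstract's key ingredient, density compensation in the non-Cartesian adjoint, can be illustrated in a few lines. The numpy sketch below (explicit non-uniform DFT, spacing-based weights, 1D) shows the general principle only; it is not the paper's TensorFlow NUFFT implementation, and the sampling pattern and weights are assumptions for illustration.

```python
import numpy as np

# Minimal 1D illustration of density compensation (DCp) in a
# non-Cartesian adjoint, the ingredient NC-PDNet unrolls around.
# The explicit non-uniform DFT here is illustrative only.

n = 64                                       # image size
x_true = np.zeros(n); x_true[24:40] = 1.0    # simple test object

grid = np.arange(n) - n // 2
rng = np.random.default_rng(0)
# Non-uniform frequencies, denser near the k-space center
# (radial-like sampling; an assumption for this sketch).
k = np.sort(rng.normal(0.0, 0.15, 3 * n)).clip(-0.5, 0.5 - 1e-9)

F = np.exp(-2j * np.pi * np.outer(k, grid))  # forward NDFT matrix
y = F @ x_true                               # simulated k-space data

# Density compensation weights ~ inverse local sample density
# (here approximated by the spacing between neighbouring samples).
d = np.gradient(k)
x_adj = (F.conj().T @ y).real / len(k)       # plain adjoint
x_dcp = (F.conj().T @ (d * y)).real          # density-compensated adjoint

for name, x in [("adjoint", x_adj), ("DCp adjoint", x_dcp)]:
    err = np.linalg.norm(x / x.max() - x_true) / np.linalg.norm(x_true)
    print(f"{name}: relative error {err:.3f}")
```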


Subject(s)
Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Brain/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Neural Networks, Computer ; Signal-to-Noise Ratio
2.
IEEE Trans Med Imaging ; 40(9): 2306-2317, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33929957

ABSTRACT

Accelerating MRI scans is one of the principal outstanding problems in the MRI research community. Toward this goal, we hosted the second fastMRI competition, aimed at reconstructing MR images from subsampled k-space data. We provided participants with data from 7,299 clinical brain scans (de-identified via a HIPAA-compliant procedure by NYU Langone Health), holding back the fully sampled data from 894 of these scans for challenge evaluation purposes. In contrast to the 2019 challenge, we focused our radiologist evaluations on pathological assessment in brain images. We also debuted a new Transfer track that required participants to submit models evaluated on MRI scanners from outside the training set. We received 19 submissions from eight different groups. Results showed one team scoring best in both SSIM scores and qualitative radiologist evaluations. We also analyzed alternative metrics to mitigate the effects of background noise, and collected feedback from the participants to inform future challenges. Lastly, we identified common failure modes across the submissions, highlighting areas in need of further research in the MRI reconstruction community.
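
For readers unfamiliar with the task, the sketch below shows what retrospective k-space subsampling looks like in code. The 4x acceleration, 8% fully sampled center, and random outer-column selection are illustrative assumptions, not the challenge's exact mask protocol.

```python
import numpy as np

# Retrospective Cartesian subsampling: keep a fully sampled
# low-frequency band plus a random subset of outer phase-encode lines.

def cartesian_mask(n_cols, accel=4, center_frac=0.08, seed=0):
    rng = np.random.default_rng(seed)
    n_center = int(round(n_cols * center_frac))
    mask = np.zeros(n_cols, dtype=bool)
    pad = (n_cols - n_center) // 2
    mask[pad:pad + n_center] = True          # fully sampled center band
    n_extra = max(n_cols // accel - n_center, 0)
    outer = np.flatnonzero(~mask)
    mask[rng.choice(outer, size=n_extra, replace=False)] = True
    return mask

kspace = np.fft.fft2(np.random.rand(320, 320))  # stand-in for coil data
mask = cartesian_mask(kspace.shape[1])
undersampled = kspace * mask[None, :]           # zero out unsampled columns
print(f"sampled fraction: {mask.mean():.2f}")
```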


Subject(s)
Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Brain/diagnostic imaging ; Humans ; Machine Learning ; Neuroimaging
3.
IEEE Trans Image Process ; 18(2): 310-21, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19131301

ABSTRACT

We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance-stabilizing transform, leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., the l1-norm). An additional term is included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experiments demonstrate the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
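
The first two ingredients, the Anscombe stabilization and the sparsity-promoting soft threshold, can be sketched as follows (PyWavelets, denoising only; the paper's full algorithm additionally includes the blur operator, the positivity term, and the forward-backward iterations, and the wavelet and threshold below are illustrative choices).

```python
import numpy as np
import pywt  # PyWavelets

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)      # Poisson -> ~N(., 1)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0        # simple algebraic inverse

rng = np.random.default_rng(0)
clean = np.clip(np.sin(np.linspace(0, 6 * np.pi, 1024)) * 5 + 6, 0, None)
counts = rng.poisson(clean)                  # low-count Poisson data

z = anscombe(counts.astype(float))           # noise now ~unit variance
coeffs = pywt.wavedec(z, "db4", level=5)
# Keep the coarse approximation, soft-threshold the detail bands.
coeffs = [coeffs[0]] + [pywt.threshold(c, 3.0, mode="soft")
                        for c in coeffs[1:]]
estimate = inverse_anscombe(pywt.waverec(coeffs, "db4"))[: len(counts)]

print("input RMSE :", np.sqrt(np.mean((counts - clean) ** 2)))
print("output RMSE:", np.sqrt(np.mean((estimate - clean) ** 2)))
```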


Subject(s)
Algorithms ; Artifacts ; Image Interpretation, Computer-Assisted/methods ; Data Interpretation, Statistical ; Image Enhancement/methods ; Models, Statistical ; Poisson Distribution ; Reproducibility of Results ; Sensitivity and Specificity
4.
IEEE Trans Image Process ; 17(7): 1093-108, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18586618

ABSTRACT

In order to denoise Poisson count data, we introduce a variance-stabilizing transform (VST) applied to a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets, and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive with many existing denoising methods.
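
Once coefficients are stabilized to known variances, detecting the significant ones reduces to a two-sided z-test. A minimal sketch, with a placeholder value standing in for the paper's analytically derived per-scale variances:

```python
import numpy as np
from scipy.stats import norm

def significant(coeffs, sigma, alpha=1e-3):
    """Boolean mask of coefficients rejecting the 'pure noise' null."""
    thresh = sigma * norm.ppf(1.0 - alpha / 2.0)
    return np.abs(coeffs) > thresh

rng = np.random.default_rng(1)
sigma_j = 0.7                                # assumed known scale variance
noise_band = rng.normal(0.0, sigma_j, 4096)
signal_band = noise_band.copy(); signal_band[2000:2016] += 5.0

print("false detections   :", significant(noise_band, sigma_j).sum())
print("detections w/signal:", significant(signal_band, sigma_j).sum())
```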


Subject(s)
Algorithms ; Artifacts ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Computer Simulation ; Models, Statistical ; Poisson Distribution ; Reproducibility of Results ; Sensitivity and Specificity
5.
IEEE Trans Image Process ; 16(11): 2662-74, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17990743

ABSTRACT

Over the last few years, the development of multichannel sensors has motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as evidenced by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. Here, we give some new and essential insights into the use of sparsity in source separation, and we outline the essential role of morphological diversity as a source of diversity or contrast between the sources. This paper introduces a new BSS method coined generalized morphological component analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient BSS method. We present arguments and a discussion supporting the convergence of the GMCA algorithm. Numerical results in multivariate image and signal processing illustrate the good performance of GMCA and its robustness to noise.
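
A stripped-down version of the alternating scheme can be sketched as follows. Sparsity is imposed directly in the sample domain and the threshold schedule is an assumption for illustration, whereas the paper works with redundant multiscale representations:

```python
import numpy as np

# GMCA-style alternation: least-squares source estimate, soft threshold
# with a decreasing level, then mixing-matrix update by least squares.

def gmca_sketch(X, n_sources, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for it in range(n_iter):
        S = np.linalg.pinv(A) @ X                    # source update
        lam = np.percentile(np.abs(S), 95) * (1 - it / n_iter)
        S = np.sign(S) * np.maximum(np.abs(S) - lam, 0)  # soft threshold
        keep = np.linalg.norm(S, axis=1) > 0
        A[:, keep] = (X @ np.linalg.pinv(S))[:, keep]    # mixing update
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    return A, S

# Two sparse sources observed through three noisy channels.
rng = np.random.default_rng(3)
S_true = np.zeros((2, 1000))
S_true[0, rng.choice(1000, 20)] = rng.standard_normal(20) * 10
S_true[1, rng.choice(1000, 20)] = rng.standard_normal(20) * 10
A_true = rng.standard_normal((3, 2))
X = A_true @ S_true + 0.01 * rng.standard_normal((3, 1000))
A_est, S_est = gmca_sketch(X, 2)
```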


Subject(s)
Algorithms ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Information Storage and Retrieval/methods ; Reproducibility of Results ; Sensitivity and Specificity
6.
IEEE Trans Image Process ; 16(11): 2675-81, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17990744

ABSTRACT

In a recent paper, a method called morphological component analysis (MCA) was proposed to separate the texture from the natural part of images. MCA relies on an iterative thresholding algorithm, using a threshold that decreases linearly towards zero over the iterations. This paper shows how MCA convergence can be drastically improved using the mutual incoherence of the dictionaries associated with the different components. This modified MCA algorithm is then compared to basis pursuit (BP), and experiments show that the MCA and BP solutions are similar in terms of sparsity, as measured by the l1 norm, but that MCA is much faster and makes it possible to handle large-scale data sets.
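
The iterative scheme is compact enough to sketch. Below, two mutually incoherent dictionaries (identity for spikes, an orthonormal DCT for oscillations) stand in for the paper's texture/cartoon dictionaries, with the linearly decreasing threshold described in the abstract; the signal and iteration count are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 512
spikes = np.zeros(n); spikes[rng.choice(n, 8)] = rng.uniform(2, 4, 8)
waves = np.cos(2 * np.pi * 17 * np.arange(n) / n)
x = spikes + waves                            # mixture to separate

parts = [np.zeros(n), np.zeros(n)]            # [spike part, wave part]
n_iter, lam0 = 100, np.abs(x).max()
for it in range(n_iter):
    lam = lam0 * (1 - it / n_iter)            # linearly decreasing threshold
    # Update each component on the residual left by the other one.
    r = x - parts[1]
    parts[0] = np.where(np.abs(r) > lam, r, 0.0)       # identity dictionary
    r = dct(x - parts[0], norm="ortho")
    parts[1] = idct(np.where(np.abs(r) > lam, r, 0.0), norm="ortho")

print("spike part error:", np.linalg.norm(parts[0] - spikes))
print("wave part error :", np.linalg.norm(parts[1] - waves))
```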


Subject(s)
Algorithms ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Artificial Intelligence ; Principal Component Analysis ; Reproducibility of Results ; Sensitivity and Specificity
7.
IEEE Trans Image Process ; 16(2): 297-309, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17269625

ABSTRACT

This paper describes the undecimated wavelet transform and its reconstruction. In the first part, we show the relation between two well-known undecimated wavelet transforms: the standard undecimated wavelet transform and the isotropic undecimated wavelet transform. We then present new filter banks specially designed for undecimated wavelet decompositions, which have useful properties such as robustness to the ringing artifacts that generally appear in wavelet-based denoising methods. A range of examples illustrates the results.
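
The isotropic undecimated ("à trous") variant discussed here admits a very short implementation. The 1D B3-spline version below is a standard textbook form, assumed here for illustration rather than taken from the paper's new filter banks:

```python
import numpy as np
from scipy.ndimage import convolve1d

# Isotropic undecimated wavelet ('a trous') transform: each scale is the
# difference of two successively smoothed versions, using the B3-spline
# kernel with holes. No decimation, so every band has the input's size
# and reconstruction is a plain sum.

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def starlet(x, n_scales=4):
    bands, c = [], x.astype(float)
    for j in range(n_scales):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[:: 2**j] = B3                  # insert 2^j - 1 holes
        smooth = convolve1d(c, kernel, mode="reflect")
        bands.append(c - smooth)              # detail at scale j
        c = smooth
    bands.append(c)                           # coarse residual
    return bands

x = np.sin(np.linspace(0, 8 * np.pi, 256)) \
    + np.random.default_rng(0).normal(0, 0.1, 256)
bands = starlet(x)
print("reconstruction error:", np.abs(sum(bands) - x).max())  # ~1e-16
```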


Subject(s)
Algorithms ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Information Storage and Retrieval/methods ; Signal Processing, Computer-Assisted ; Numerical Analysis, Computer-Assisted
8.
Nature ; 445(7125): 286-90, 2007 Jan 18.
Article in English | MEDLINE | ID: mdl-17206154

ABSTRACT

Ordinary baryonic particles (such as protons and neutrons) account for only one-sixth of the total matter in the Universe. The remainder is a mysterious 'dark matter' component, which does not interact via electromagnetism and thus neither emits nor reflects light. As dark matter cannot be seen directly using traditional observations, very little is currently known about its properties. It does interact via gravity, and is most effectively probed through gravitational lensing: the deflection of light from distant galaxies by the gravitational attraction of foreground mass concentrations. This is a purely geometrical effect that is free of astrophysical assumptions and sensitive to all matter, whether baryonic or dark. Here we show high-fidelity maps of the large-scale distribution of dark matter, resolved in both angle and depth. We find a loose network of filaments, growing over time, which intersect in massive structures at the locations of clusters of galaxies. Our results are consistent with predictions of gravitationally induced structure formation, in which the initial, smooth distribution of dark matter collapses first into filaments and then into clusters, forming a gravitational scaffold into which gas can accumulate and stars can form.

9.
IEEE Trans Syst Man Cybern B Cybern ; 35(6): 1241-51, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16366249

ABSTRACT

We survey a number of applications of the wavelet transform in time series prediction. We show how multiresolution prediction can capture short-range and long-term dependencies with only a few parameters to be estimated. We then develop a new multiresolution methodology for combined noise filtering and prediction, based on an approach similar to the Kalman filter. Through considerable experimental assessment, we demonstrate the power of this methodology.
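
A minimal sketch of the multiresolution prediction idea: build an additive multiscale decomposition, fit a tiny autoregression per scale, and sum the per-scale one-step forecasts. The dyadic moving-average pyramid and the AR(3) order below are illustrative stand-ins for the wavelet machinery surveyed in the paper.

```python
import numpy as np

def smooth(x, half):
    k = np.ones(2 * half + 1) / (2 * half + 1)
    return np.convolve(np.pad(x, half, mode="edge"), k, mode="valid")

def decompose(x, n_scales=4):
    """Additive multiscale decomposition: sum(bands) + coarse == x."""
    bands, c = [], x
    for j in range(n_scales):
        s = smooth(c, 2 ** j)
        bands.append(c - s)                   # detail at scale j
        c = s
    return bands + [c]                        # coarse residual last

def ar_one_step(band, order=3):
    """Least-squares AR(order) fit, returning the one-step forecast."""
    X = np.column_stack([band[i:len(band) - order + i]
                         for i in range(order)])
    w, *_ = np.linalg.lstsq(X, band[order:], rcond=None)
    return band[-order:] @ w

t = np.arange(512)
series = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 7)
forecast = sum(ar_one_step(b) for b in decompose(series))
print("one-step forecast:", forecast)
```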


Subject(s)
Algorithms ; Models, Statistical ; Signal Processing, Computer-Assisted ; Stochastic Processes ; Computer Simulation ; Regression Analysis ; Time Factors
10.
IEEE Trans Image Process ; 14(10): 1570-82, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16238062

ABSTRACT

The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, and restoration. In recent years, several pioneering works have suggested basing such a separation on a variational formulation, while others have used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise-smooth (cartoon) parts that exploits both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea presented in this paper is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As the need to choose proper dictionaries is generally hard to meet, a TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model, and show several experimental results that validate the algorithm's performance.
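
In symbols, and with notation assumed here (y the observed image, T_t and T_n the texture and cartoon dictionaries, alpha_t and alpha_n their coefficient vectors), the combined objective reads roughly as:

```latex
\min_{\alpha_t,\,\alpha_n}\;
\|\alpha_t\|_1 + \|\alpha_n\|_1
+ \lambda\,\bigl\|\,y - T_t\alpha_t - T_n\alpha_n\,\bigr\|_2^2
+ \gamma\,\mathrm{TV}\bigl(T_n\alpha_n\bigr)
```

The l1 terms are the BPDN sparsity penalties, the quadratic term is the data fidelity that absorbs the noise, and the TV term steers the cartoon component towards piecewise smoothness.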


Subject(s)
Algorithms ; Artificial Intelligence ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Information Storage and Retrieval/methods ; Pattern Recognition, Automated/methods ; Models, Statistical
11.
IEEE Trans Image Process ; 12(6): 706-17, 2003.
Article in English | MEDLINE | ID: mdl-18237946

ABSTRACT

We present in this paper a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets and is therefore well-suited to multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform and with the Multiscale Retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide a quantitative comparative evaluation. Our findings are that curvelet-based enhancement outperforms other enhancement methods on noisy images, but on noiseless or near-noiseless images it is not remarkably better than wavelet-based enhancement.
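
The general recipe, amplifying faint band coefficients while leaving strong edges and sub-noise coefficients alone, can be sketched as below. The gain law, its parameters, and the wavelet transform standing in for the curvelet transform are all illustrative assumptions rather than the paper's exact enhancement function.

```python
import numpy as np
import pywt

def gain(c, sigma, p=0.5, m=30.0):
    """Boost mid-range coefficients; leave noise and strong edges alone."""
    a = np.abs(c)
    g = np.ones_like(a)
    mid = (a >= 3 * sigma) & (a < m)
    g[mid] = (m / np.maximum(a[mid], 1e-12)) ** p
    return c * g

rng = np.random.default_rng(0)
img = np.kron(rng.uniform(0, 1, (8, 8)), np.ones((32, 32)))  # blocky test image
coeffs = pywt.wavedec2(img, "db2", level=3)
sigma = 0.05                                 # assumed noise level
enhanced_coeffs = [coeffs[0]] + [
    tuple(gain(d, sigma) for d in detail) for detail in coeffs[1:]
]
enhanced = pywt.waverec2(enhanced_coeffs, "db2")
```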

12.
IEEE Trans Image Process ; 11(6): 670-84, 2002.
Article in English | MEDLINE | ID: mdl-18244665

ABSTRACT

We describe approximate digital implementations of two new mathematical transforms, namely the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, a pseudo-polar sampling set based on a concentric-squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with state-of-the-art techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher-quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
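
The conceptual core, a ridgelet as a 1D wavelet analysis of Radon projections, fits in a few lines. Note that skimage's spatial-domain Radon transform stands in for the paper's Fourier-domain rectopolar Radon, so this is a sketch of the idea, not of their algorithm.

```python
import numpy as np
import pywt
from skimage.transform import radon

# A line singularity in the image becomes a point singularity in the
# Radon domain, which 1D wavelets handle well: that is the ridgelet idea.

image = np.zeros((128, 128))
image[64, 20:108] = 1.0                      # a line singularity

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)         # shape: (radial, angles)

ridgelet = [pywt.wavedec(sinogram[:, i], "db4", level=3)
            for i in range(sinogram.shape[1])]  # 1D wavelets per angle
# The line shows up as a few very large coefficients at the aligned angle:
peaks = [max(np.abs(c).max() for c in coeffs) for coeffs in ridgelet]
print("dominant projection angle:", theta[int(np.argmax(peaks))])
```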
