Results 1 - 8 of 8
1.
Magn Reson Med; 64(4): 1078-88, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20564598

ABSTRACT

Enforcing sparsity during magnetic resonance image reconstruction has been successfully applied to partially parallel imaging (PPI) techniques to reduce noise and artifact levels and hence to achieve even higher acceleration factors. However, the existing sparsity-constrained PPI techniques suffer from two major problems: speed and robustness. By introducing an auxiliary variable and decomposing the original minimization problem into two subproblems that are much easier to solve, a fast and robust numerical algorithm for sparsity-constrained PPI is developed in this work. The specific implementation for a conventional Cartesian trajectory data set is named self-feeding Sparse Sensitivity Encoding (SENSE). The computational cost of the proposed method is two conventional SENSE reconstructions plus one spatially adaptive image denoising procedure. With the reconstruction time approximately doubled, images with a much lower root mean square error (RMSE) can be achieved at high acceleration factors. Using a standard eight-channel head coil, a net acceleration factor of 5 along one dimension can be achieved with low RMSE. Furthermore, the algorithm is insensitive to the choice of parameters. This work improves the clinical applicability of SENSE at high acceleration factors.
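The variable-splitting idea in this abstract can be illustrated with a small runnable toy. The sketch below is not the paper's self-feeding Sparse SENSE: it reduces the problem to a single-coil Cartesian case, replaces the SENSE encoding with a masked 2-D FFT, and stands in a median filter for the spatially adaptive denoiser. All function names, parameters, and the phantom are illustrative assumptions.

```python
# Toy variable-splitting reconstruction: alternate an image-domain denoising
# step (auxiliary-variable subproblem) with a closed-form k-space
# data-consistency step (easy subproblem on a Cartesian grid).
import numpy as np
from scipy.ndimage import median_filter

def denoise(img, size=3):
    """Stand-in denoiser; real and imaginary parts are filtered separately."""
    return median_filter(img.real, size) + 1j * median_filter(img.imag, size)

def split_recon(kspace, mask, lam=1.0, iters=5):
    """kspace: (H, W) undersampled data; mask: (H, W) boolean sampling pattern."""
    x = np.fft.ifft2(kspace * mask)                  # zero-filled starting image
    for _ in range(iters):
        z = denoise(x)                               # subproblem 1: denoise auxiliary variable
        k = np.fft.fft2(z)
        k[mask] = (kspace[mask] + lam * k[mask]) / (1.0 + lam)  # subproblem 2: data consistency
        x = np.fft.ifft2(k)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.kron(rng.random((16, 16)), np.ones((8, 8)))      # piecewise-constant phantom
    mask = rng.random((128, 128)) < 0.4                          # ~2.5x undersampling
    kspace = np.fft.fft2(truth) + 2.0 * rng.standard_normal((128, 128))
    recon = split_recon(kspace, mask)
    print("RMSE:", np.sqrt(np.mean(np.abs(recon - truth) ** 2)))
```

Each pass mirrors the two-subproblem decomposition: the denoising step is the auxiliary-variable update, and the k-space blend is the closed-form solution of the data-fidelity subproblem on a Cartesian grid.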


Subject(s)
Algorithms , Brain/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Humans , Image Enhancement/methods , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
2.
IEEE Trans Neural Netw Learn Syst; 31(3): 915-926, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31094696

ABSTRACT

In this paper, we propose a robust linear discriminant analysis (RLDA) through Bhattacharyya error bound optimization. RLDA considers a nonconvex problem with the L1-norm operation that makes it less sensitive to outliers and noise than the L2-norm linear discriminant analysis (LDA). In addition, we extend our RLDA to a sparse model (RSLDA). Both RLDA and RSLDA can extract unbounded numbers of features and avoid the small sample size (SSS) problem, and an alternating direction method of multipliers (ADMM) is used to cope with the nonconvexity in the proposed formulations. Compared with the traditional LDA, our RLDA and RSLDA are more robust to outliers and noise, and RSLDA can obtain sparse discriminant directions. These findings are supported by experiments on artificial data sets as well as human face databases.
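As a concrete picture of the alternating direction method of multipliers (ADMM) splitting mentioned above, the sketch below applies it to the classic L1-regularized least-squares (lasso) problem rather than to the paper's nonconvex RLDA/RSLDA objectives, whose Bhattacharyya-bound terms are beyond a short example. All names and parameter values are illustrative.

```python
# Standard ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1, using the x/z
# splitting with scaled dual variable u: a quadratic step, an L1 prox
# (soft-thresholding) step, and a dual update.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_admm(A, b, lam=1.0, rho=1.0, iters=300):
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))      # factor once, reuse every iteration
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # quadratic (smooth) step
        z = soft_threshold(x + u, lam / rho)                 # L1 proximal step
        u = u + x - z                                        # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 30))
    x_true = np.zeros(30); x_true[:5] = 2.0
    b = A @ x_true + 0.1 * rng.standard_normal(100)
    print(np.round(lasso_admm(A, b), 2))                     # recovers a sparse vector
```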

3.
IEEE Trans Pattern Anal Mach Intell; 41(10): 2305-2318, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30295612

ABSTRACT

Deep neural networks (DNNs) have shown very promising results for various image restoration (IR) tasks. However, the design of network architectures remains a major challenge for achieving further improvements. While most existing DNN-based methods solve the IR problems by directly mapping low-quality images to the desired high-quality images, the observation models characterizing the image degradation processes have been largely ignored. In this paper, we first propose a denoising-based IR algorithm whose iterative steps can be computed efficiently. Then, the iterative process is unfolded into a deep neural network, which is composed of multiple denoiser modules interleaved with back-projection (BP) modules that enforce consistency with the observations. A convolutional neural network (CNN) based denoiser that can exploit the multi-scale redundancies of natural images is proposed. As such, the proposed network not only exploits the powerful denoising ability of DNNs but also leverages the prior of the observation model. Through end-to-end training, both the denoisers and the BP modules can be jointly optimized. Experimental results on several IR tasks, including image denoising, deblurring, and super-resolution, show that the proposed method achieves very competitive and often state-of-the-art results.
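A minimal sketch of the unfolding idea: a fixed number of iterations, each consisting of a back-projection (data-consistency) module followed by a small CNN denoiser module, written as a single network with learnable step sizes. The degradation operator here is a simple 2x average-pooling downsampler standing in for a generic observation model, and the tiny denoiser is not the paper's multi-scale architecture; all names and sizes are assumptions for the demo.

```python
# Unrolled restoration network: K iterations of (back-projection, denoise).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Denoiser(nn.Module):
    """Tiny residual CNN denoiser module (stand-in for a multi-scale denoiser)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        return x - self.net(x)                      # residual denoising

class UnfoldedIR(nn.Module):
    def __init__(self, steps=4, scale=2):
        super().__init__()
        self.denoisers = nn.ModuleList([Denoiser() for _ in range(steps)])
        self.step = nn.Parameter(torch.full((steps,), 0.5))   # learnable step sizes
        self.scale = scale

    def A(self, x):                                  # observation model: downsample by averaging
        return F.avg_pool2d(x, self.scale)

    def At(self, y):                                 # adjoint of A: spread each value back
        return F.interpolate(y, scale_factor=self.scale, mode="nearest") / self.scale ** 2

    def forward(self, y):
        x = F.interpolate(y, scale_factor=self.scale, mode="bilinear", align_corners=False)
        for k, denoise in enumerate(self.denoisers):
            x = x + self.step[k] * self.At(y - self.A(x))     # back-projection module
            x = denoise(x)                                    # denoiser module
        return x

if __name__ == "__main__":
    y = torch.rand(1, 1, 32, 32)                     # low-resolution observation
    print(UnfoldedIR()(y).shape)                     # torch.Size([1, 1, 64, 64])
```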

4.
Med Phys; 46(8): 3399-3413, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31135966

ABSTRACT

PURPOSE: To develop and evaluate a combined parallel imaging and convolutional neural network image reconstruction framework for low-latency and high-quality accelerated real-time MR imaging. METHODS: Conventional parallel imaging reconstruction, resolved as gradient descent steps, was compacted into network layers and interleaved with convolutional layers in a general convolutional neural network. All parameters of the network were determined during the offline training process and, once learned, applied to unseen data. The proposed network was first evaluated for real-time cardiac imaging at 1.5 T and real-time abdominal imaging at 0.35 T, using threefold to fivefold retrospective undersampling for cardiac imaging and threefold retrospective undersampling for abdominal imaging. Then, prospective undersampling with fourfold acceleration was performed on cardiac imaging to compare the proposed method with the standard clinically available GRAPPA method and the state-of-the-art L1-ESPIRiT method. RESULTS: Both retrospective and prospective evaluations confirmed that the proposed network was able to reconstruct images with a lower noise level and reduced aliasing artifacts in comparison with the single-coil-based and L1-ESPIRiT reconstructions for cardiac imaging at 1.5 T, and with the GRAPPA and L1-ESPIRiT reconstructions for abdominal imaging at 0.35 T. Using the proposed method, each frame can be reconstructed in less than 100 ms, suggesting its clinical compatibility. CONCLUSION: The proposed combined parallel imaging and convolutional neural network reconstruction framework is a promising technique that allows low-latency and high-quality real-time MR imaging.
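The "parallel imaging reconstruction resolved as gradient descent steps" ingredient can be sketched independently of the CNN part. Below is one gradient step on the multi-coil data-fidelity term ||M F (S x) - y||^2, with M the sampling mask, F an orthonormal 2-D FFT, and S the coil sensitivity maps; in the described framework such steps become network layers interleaved with convolutional layers. The demo data, step size, and function name are assumptions, not the paper's implementation.

```python
# One gradient-descent data-consistency step for multi-coil Cartesian MRI.
import numpy as np

def pi_gradient_step(x, kspace, maps, mask, step=0.5):
    """x: (H, W) image; kspace, maps: (C, H, W); mask: (H, W) boolean."""
    coil_k = mask * np.fft.fft2(maps * x, axes=(-2, -1), norm="ortho")
    residual = coil_k - mask * kspace
    grad = np.sum(np.conj(maps) * np.fft.ifft2(mask * residual, axes=(-2, -1), norm="ortho"),
                  axis=0)                          # coil-combined gradient of the data term
    return x - step * grad

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    C, H, W = 8, 64, 64
    maps = rng.standard_normal((C, H, W)) + 1j * rng.standard_normal((C, H, W))
    maps /= np.sqrt(np.sum(np.abs(maps) ** 2, axis=0))            # normalize coil maps
    truth = rng.random((H, W))
    mask = rng.random((H, W)) < 0.3                                # ~3.3x undersampling
    kspace = mask * np.fft.fft2(maps * truth, axes=(-2, -1), norm="ortho")
    x = np.sum(np.conj(maps) * np.fft.ifft2(kspace, axes=(-2, -1), norm="ortho"), axis=0)
    for _ in range(10):
        x = pi_gradient_step(x, kspace, maps, mask)
    print("RMSE after 10 steps:", np.sqrt(np.mean(np.abs(x - truth) ** 2)))
```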


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Heart/diagnostic imaging , Humans , Time Factors
5.
Quant Imaging Med Surg; 9(9): 1516-1527, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31667138

ABSTRACT

BACKGROUND: To review and evaluate approaches to convolutional neural network (CNN) reconstruction for accelerated cardiac MR imaging in a realistic clinical context. METHODS: Two CNN architectures, Unet and residual network (Resnet), were evaluated using quantitative metrics and qualitative radiologist assessment. Four different loss functions were also considered: pixel-wise (L1 and L2), patch-wise structural dissimilarity (Dssim), and feature-wise (perceptual loss). The networks were evaluated using retrospectively and prospectively under-sampled cardiac MR data. RESULTS: Based on our assessments, Resnet and Unet achieve similar image quality, but the former requires only 100,000 parameters compared with 1.3 million parameters for the latter. The perceptual loss function performed significantly better than the L1, L2, or Dssim loss functions as determined by the radiologist scores. CONCLUSIONS: CNN image reconstruction using Resnet yields image quality comparable to Unet with roughly one-tenth the number of parameters, which has implications for training with significantly lower data requirements. Network training using the perceptual loss function was found to agree better with radiologist scoring than the L1, L2, or Dssim loss functions.
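A perceptual (feature-wise) loss compares reconstruction and target in a feature space rather than pixel space. The sketch below is hedged: practical perceptual losses usually take features from a pretrained network such as VGG, whereas here a small frozen, randomly initialized convolutional stack stands in so the example runs offline; the class name and layer sizes are assumptions.

```python
# Minimal perceptual-loss pattern: fixed feature extractor + MSE on features.
import torch
import torch.nn as nn

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                      # frozen stand-in feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
        )
        for p in self.features.parameters():
            p.requires_grad_(False)                          # loss network is not trained

    def forward(self, recon, target):
        return nn.functional.mse_loss(self.features(recon), self.features(target))

if __name__ == "__main__":
    loss_fn = PerceptualLoss()
    recon, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(float(loss_fn(recon, target)))
```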

6.
IEEE Trans Pattern Anal Mach Intell; 28(9): 1519-24, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16929737

ABSTRACT

In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where we rarely know the strength, direction, or number of light sources. The proposed LTV model has the ability to factorize a single face image and obtain the illumination-invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. A further merit of this model is that it requires neither a lighting assumption nor any training. The LTV model reaches very high recognition rates in tests on both the Yale and CMU PIE face databases, as well as on a face database containing 765 subjects under outdoor lighting conditions.
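A rough sketch of the LTV factorization: take the logarithm of the image, split it into a large-scale component (illumination) and a small-scale residual (the illumination-invariant facial structure). The paper minimizes a total-variation model with an L1 fidelity term; the TV-L2 denoiser from scikit-image is used below only as a convenient stand-in, and the weight value, function names, and synthetic demo are assumptions.

```python
# Approximate log-domain large-scale / small-scale split for illumination removal.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def ltv_structure(face, weight=0.4, eps=1e-3):
    """face: 2-D array of positive intensities; returns the small-scale factor."""
    log_f = np.log(face.astype(float) + eps)
    log_u = denoise_tv_chambolle(log_f, weight=weight)   # large-scale part (illumination)
    log_v = log_f - log_u                                # small-scale facial structure
    return np.exp(log_v)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    ramp = np.linspace(0.0, 1.0, 128)
    illumination = np.outer(np.ones(128), 0.2 + 0.8 * ramp)    # one-sided lighting
    texture = 0.5 + 0.1 * rng.standard_normal((128, 128))      # fine-scale "facial" detail
    face = illumination * texture
    structure = ltv_structure(face)
    print(structure.shape, round(float(structure.mean()), 3))
```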


Subject(s)
Algorithms , Artificial Intelligence , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Lighting , Models, Biological , Pattern Recognition, Automated/methods , Analysis of Variance , Computer Simulation , Humans , Image Enhancement/methods , Information Storage and Retrieval/methods , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity
7.
Phys Med Biol; 58(5): 1447-64, 2013 Mar 07.
Article in English | MEDLINE | ID: mdl-23399757

ABSTRACT

The patient respiratory signal associated with cone beam CT (CBCT) projections is important for lung cancer radiotherapy. In contrast to monitoring an external surrogate of respiration, such a signal can be extracted directly from the CBCT projections. In this paper, we propose a novel local principal component analysis (LPCA) method that extracts the respiratory signal by distinguishing the respiration-motion-induced content change from the gantry-rotation-induced content change in the CBCT projections. The LPCA method is evaluated against three state-of-the-art projection-based methods, namely the Amsterdam Shroud method, the intensity analysis method, and the Fourier-transform-based phase analysis method. Clinical CBCT projection data of eight patients, acquired under various clinical scenarios, were used to investigate the performance of each method. The proposed LPCA method demonstrated the best overall performance for the cases tested and is therefore a promising technique for extracting a respiratory signal. We also characterized the applicability of each existing method.
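The following is a simplified, hedged reading of the local PCA idea, not the authors' exact algorithm: within a short sliding window of consecutive projections the gantry angle changes little, so the dominant intensity variation is attributed to respiration; the first principal-component score in each window is taken as the local respiratory signal, and overlapping windows are sign-aligned and averaged. The function name, window length, and synthetic data are assumptions.

```python
# Sliding-window (local) PCA extraction of a breathing trace from projections.
import numpy as np

def local_pca_respiration(projections, window=20):
    """projections: (n_proj, n_pixels) flattened, downsampled projections."""
    n = projections.shape[0]
    starts = list(range(0, n - window + 1, window // 2))       # 50%-overlapping windows
    segments = []
    for start in starts:
        block = projections[start:start + window]
        centered = block - block.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        segments.append(centered @ vt[0])                      # first PC score per frame
    for i in range(1, len(segments)):                          # sign-align consecutive windows
        half = window // 2
        if np.dot(segments[i - 1][-half:], segments[i][:half]) < 0:
            segments[i] = -segments[i]
    signal, counts = np.zeros(n), np.zeros(n)
    for start, seg in zip(starts, segments):                   # average overlapping estimates
        signal[start:start + window] += seg
        counts[start:start + window] += 1
    return signal / np.maximum(counts, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_proj, n_pix = 300, 500
    breathing = np.sin(2 * np.pi * np.arange(n_proj) / 25.0)   # synthetic respiratory trace
    gantry = np.linspace(0.0, 1.0, n_proj)                     # slow rotation-induced drift
    projections = (np.outer(breathing, rng.standard_normal(n_pix))
                   + 5.0 * np.outer(gantry, rng.standard_normal(n_pix))
                   + 0.5 * rng.standard_normal((n_proj, n_pix)))
    est = local_pca_respiration(projections)
    print("correlation magnitude:", abs(np.corrcoef(est, breathing)[0, 1]))
```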


Subject(s)
Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Radiography, Thoracic/methods , Respiration , Four-Dimensional Computed Tomography , Humans , Motion , Principal Component Analysis , Rotation , Time Factors
8.
Bioinformatics; 21(10): 2410-6, 2005 May 15.
Article in English | MEDLINE | ID: mdl-15728112

ABSTRACT

MOTIVATION: Background correction is an important preprocessing step in cDNA microarray data analysis. A variety of methods have been used for this purpose. However, many kinds of backgrounds, especially inhomogeneous ones, cannot be estimated correctly by any of the existing methods. In this paper, we propose the use of the TV+L1 model, which minimizes the total variation (TV) of the image together with an L1 fidelity term, to correct background bias. We demonstrate its advantages over the existing methods both by analytically discussing its properties and by numerically comparing it with morphological opening. RESULTS: Experimental results on both synthetic data and real microarray images demonstrate that the TV+L1 model gives restored intensities that are closer to the true data than morphological opening. As a result, this method can play an important role in the preprocessing of cDNA microarray data.
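The model above is min_u TV(u) + lam*||u - f||_1. The sketch below minimizes it with a generic Chambolle-Pock primal-dual loop, which is not necessarily the numerical scheme used in the paper; with a small lam, u keeps only the large-scale background, so f - u retains the small spots. The discretization, step sizes, and synthetic image are assumptions for the demo.

```python
# TV+L1 background estimation via a primal-dual loop (forward-difference TV).
import numpy as np

def grad(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]              # forward differences, Neumann boundary
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_l1(f, lam=0.3, iters=300, tau=0.25, sigma=0.25):
    """Minimize TV(u) + lam*||u - f||_1 with Chambolle-Pock iterations."""
    u, u_bar = f.copy(), f.copy()
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))   # project dual onto unit ball
        px, py = px / norm, py / norm
        v = u + tau * div(px, py)
        u_new = f + np.sign(v - f) * np.maximum(np.abs(v - f) - tau * lam, 0.0)  # L1 prox
        u_bar = 2 * u_new - u
        u = u_new
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    xx = np.tile(np.arange(128.0), (128, 1))
    background = 0.5 + 0.3 * np.sin(xx / 40.0)               # smooth, inhomogeneous background
    spots = np.zeros((128, 128))
    spots[32:36, 32:36] = 1.0; spots[80:86, 90:96] = 0.8     # small "spots"
    f = background + spots + 0.02 * rng.standard_normal((128, 128))
    u = tv_l1(f, lam=0.3)                                    # estimated background
    corrected = f - u                                        # background-corrected intensities
    print("max corrected spot intensity:", float(corrected.max()))
```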


Subject(s)
Algorithms , Gene Expression Profiling/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Oligonucleotide Array Sequence Analysis/methods , Computer Simulation , In Situ Hybridization, Fluorescence/methods , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity