Results 1 - 9 of 9
1.
J Xray Sci Technol ; 31(2): 319-336, 2023.
Article in English | MEDLINE | ID: mdl-36683486

ABSTRACT

BACKGROUND: Computed tomography (CT) plays an important role in the field of non-destructive testing. However, conventional CT images often have blurred edges and unclear textures, which hinders follow-up medical diagnosis and industrial inspection. OBJECTIVE: This study aims to generate high-resolution CT images using a new CT super-resolution reconstruction method that combines sparsity regularization with a deep learning prior. METHODS: The new method reconstructs CT images through a model that incorporates image-gradient L0-norm minimization and deep image priors within a plug-and-play super-resolution framework. The deep learning priors are learned from a deep residual network and then plugged into the proposed framework, and the alternating direction method of multipliers (ADMM) is used to optimize the iterative solution of the model. RESULTS: On simulated data, the new method improves the peak signal-to-noise ratio (PSNR) by 7%, and the modulation transfer function (MTF) curves show that MTF50 increases by 0.02 compared with deep plug-and-play super-resolution. On real CT data, the new method improves PSNR by 5.1% and MTF50 by 0.11. CONCLUSION: Both simulated and real data experiments show that the proposed CT super-resolution method using deep learning priors reconstructs CT images with lower noise and better detail recovery. The method is flexible, effective, and broadly applicable to low-resolution CT image super-resolution.
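As a rough illustration of the reconstruction loop this abstract describes, the sketch below runs plug-and-play ADMM iterations combining a data-fidelity step, an L0-gradient-style hard-thresholding step, and a plugged-in denoiser. The block-averaging forward model, the box-blur standing in for the deep residual network, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def downsample(x, factor=2):
    """Forward model A: block-averaging downsampler (assumed, for illustration)."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(y, factor=2):
    """Adjoint-like operator: nearest-neighbour upsampling."""
    return np.kron(y, np.ones((factor, factor)))

def hard_threshold_gradient(x, lam):
    """Approximate L0-gradient step: zero out small finite differences
    (vertical axis only, for brevity), then reintegrate."""
    gx = np.diff(x, axis=0, prepend=x[:1])
    gx[np.abs(gx) < lam] = 0.0
    return x[:1] + np.cumsum(gx, axis=0)

def denoiser_stub(x):
    """Stand-in for the learned deep residual-network prior: a 3x3 box blur."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def pnp_admm_step(x, z, u, y, rho=1.0, step=0.5, lam=0.05):
    """One ADMM iteration: gradient step on the data term, then the
    plugged-in prior on the split variable, then the dual update."""
    residual = upsample(downsample(x) - y)
    x = x - step * (residual + rho * (x - z + u))
    z = denoiser_stub(hard_threshold_gradient(x + u, lam))
    u = u + x - z
    return x, z, u
```

Swapping `denoiser_stub` for a trained network is what makes the framework "plug-and-play": the prior step needs no explicit regularizer, only a denoiser.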


Subjects
Algorithms, X-Ray Computed Tomography, Imaging Phantoms, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Computer Simulation
2.
J Xray Sci Technol ; 29(3): 435-452, 2021.
Article in English | MEDLINE | ID: mdl-33843720

ABSTRACT

OBJECTIVE: To address the blurred structural details and over-smoothing produced by sparse-representation dictionary-learning reconstruction algorithms, this study tests a sparse-angle CT reconstruction method using a weighted dictionary-learning algorithm based on adaptive group-sparsity regularization (AGSR-SART). METHODS: First, a new similarity measure is defined in which covariance is introduced into the Euclidean distance, and non-local image patches are adaptively divided into groups of different sizes that serve as the basic unit of sparse representation. Second, the weight factor of the regularization terms is designed from the residuals of the dictionary representation, so that the algorithm applies different degrees of smoothing to different regions of the image during the iterative process. The sparse reconstructed image is corrected according to the difference between the estimated value and the intermediate image. Finally, the split Bregman iteration (SBI) algorithm is used to solve the objective function. An abdominal image, a pelvic image, and a thoracic image are used to evaluate the performance of the proposed method. RESULTS: In quantitative evaluations, the new algorithm yields a PSNR of 48.20, a maximum SSIM of 99.06%, and a minimum MAE of 0.0028. CONCLUSIONS: The new algorithm better preserves structural details in reconstructed CT images. It eliminates the over-smoothing seen in sparse-angle reconstruction and enhances the sparsity and non-local self-similarity of the image, and it is therefore superior to several existing reconstruction algorithms.
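The grouping idea above can be sketched as follows: a patch distance that mixes Euclidean distance with a covariance term, plus a greedy grouping pass that forms adaptively sized groups. The exact weighting and the greedy strategy are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def patch_similarity(p, q):
    """Distance combining Euclidean distance with a covariance term:
    similar structure (high covariance) shrinks the effective distance."""
    eucl = np.linalg.norm(p - q)
    cov = abs(np.cov(p.ravel(), q.ravel())[0, 1])
    return eucl / (1.0 + cov)

def group_patches(patches, threshold):
    """Greedily group patches close to a seed patch; the resulting
    variable-size groups act as the unit of sparse representation."""
    groups, used = [], set()
    for i, p in enumerate(patches):
        if i in used:
            continue
        group = [i]
        used.add(i)
        for j in range(i + 1, len(patches)):
            if j not in used and patch_similarity(p, patches[j]) < threshold:
                group.append(j)
                used.add(j)
        groups.append(group)
    return groups
```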


Subjects
Algorithms, Computer-Assisted Image Processing, Abdomen, X-Ray Computed Tomography
3.
Biomed Eng Online ; 16(1): 32, 2017 Mar 03.
Article in English | MEDLINE | ID: mdl-28253881

ABSTRACT

BACKGROUND: In diffuse optical tomography (DOT), image reconstruction is an ill-posed inverse problem, and the ill-posedness is even more severe for breast DOT because the number of unknowns to reconstruct far exceeds the achievable number of measurements. A common way to address this is to introduce regularization. There has been extensive research on constructing and optimizing objective functions; however, although these algorithms dramatically improved the reconstructed images, few of them designed an essentially differentiable objective function whose full gradient is easy to obtain to accelerate the optimization process. METHODS: This paper introduces a new kind of non-negative prior information and designs differentiable objective functions for the L1-norm, Lp (0 < p < 1)-norm, and L0-norm cases. With this non-negative prior information, the gradient of these differentiable objective functions is easy to obtain and can guide the optimization process. RESULTS: Performance was analyzed in both numerical and phantom experiments. In terms of spatial resolution, quantitativeness, gray-level resolution, and execution time, the proposed methods outperform conventional regularization methods that lack this non-negative prior information. CONCLUSIONS: The proposed methods improve the reconstructed images by exploiting the introduced non-negative prior information. Furthermore, the non-negative constraint simplifies the gradient computation, accelerating the minimization of the objective functions.
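One way to read the L1 case: once non-negativity x ≥ 0 is imposed, the penalty ||x||_1 equals sum(x), which is smooth, so the full gradient of the objective is available to a plain first-order method. The substitution x = u² used below to keep iterates non-negative is an illustrative choice, not necessarily the authors' construction.

```python
import numpy as np

def objective(u, A, y, lam):
    """Smooth L1-regularized least squares under the non-negative prior."""
    x = u ** 2                           # non-negative by construction
    r = A @ x - y
    return 0.5 * r @ r + lam * x.sum()   # ||x||_1 == sum(x) for x >= 0

def gradient(u, A, y, lam):
    """Full gradient, differentiable everywhere (no subgradients needed)."""
    x = u ** 2
    grad_x = A.T @ (A @ x - y) + lam     # d/dx of the smooth objective
    return 2 * u * grad_x                # chain rule through x = u**2

def solve(A, y, lam=0.01, step=0.1, iters=500):
    """Plain gradient descent on the differentiable objective."""
    u = np.ones(A.shape[1])
    for _ in range(iters):
        u -= step * gradient(u, A, y, lam)
    return u ** 2
```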


Subjects
Breast/diagnostic imaging, Computer-Assisted Image Processing/methods, Optical Tomography/methods, Algorithms, Female, Humans, Theoretical Models, Imaging Phantoms
4.
Biomed Tech (Berl) ; 69(5): 431-439, 2024 Oct 28.
Article in English | MEDLINE | ID: mdl-38598849

ABSTRACT

OBJECTIVES: Previous guided image filtering (GIF)-based methods often relied on total variation (TV)-based methods to reconstruct the guidance image, and they failed to accurately reconstruct the intricate details of complex clinical images. To address these problems, we propose a new sparse-view CT reconstruction method based on group-based sparse representation with weighted guided image filtering. METHODS: In each iteration of the proposed algorithm, the result constrained by the group-based sparse representation (GSR) is used as the guidance image. Weighted guided image filtering (WGIF) then transfers the important features of the guidance image to the reconstruction produced by the SART method. RESULTS: Three representative slices were tested under 64 projection views, and the proposed method yielded the best visual quality. For the shoulder case, the PSNR reaches 48.82, far surpassing the other methods. CONCLUSIONS: The experimental results demonstrate that our method is more effective in preserving structures, suppressing noise, and reducing artifacts than the competing methods.
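The filtering step can be illustrated with a minimal (unweighted) guided image filter in the spirit of He et al.: the guidance image supplies a local linear model that transfers its structure to the filtered image. WGIF additionally applies per-window edge-aware weights to the regularizer `eps`, which this sketch omits for brevity.

```python
import numpy as np

def box(x, r):
    """Mean filter over a (2r+1)x(2r+1) window (naive loops, for clarity)."""
    h, w = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = x[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out

def guided_filter(guide, src, r=2, eps=1e-3):
    """Fit the local linear model src ~ a*guide + b in each window, then
    average the coefficients so edges of the guide carry over to the output."""
    mg, ms = box(guide, r), box(src, r)
    a = (box(guide * src, r) - mg * ms) / (box(guide * guide, r) - mg ** 2 + eps)
    b = ms - a * mg
    return box(a, r) * guide + box(b, r)
```

Here the GSR-constrained image would serve as `guide` and the SART intermediate image as `src`.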


Subjects
Algorithms, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Computer-Assisted Image Processing/methods, Artifacts
5.
Front Neurosci ; 16: 760298, 2022.
Article in English | MEDLINE | ID: mdl-35495028

ABSTRACT

The spiking neural network (SNN) is a promising pathway to low-power, energy-efficient processing and computing that exploits the spike-driven and sparse features of biological systems. This article proposes a sparsity-driven SNN learning algorithm, backpropagation with sparsity regularization (BPSR), that aims to improve both spiking and synaptic sparsity. Backpropagation with spiking regularization minimizes the firing rate while maintaining accuracy, captures temporal information, and extends to the spiking recurrent layer to support brain-like structure learning. A rewiring mechanism with synaptic regularization further reduces the redundancy of the network structure: rewiring based on weight and gradient regulates the pruning and growth of synapses. Experimental results demonstrate that a network trained with BPSR exhibits synaptic sparsity and closely resembles biological systems. It not only balances accuracy against firing rate but also facilitates SNN learning by suppressing information redundancy. We evaluate BPSR on the visual datasets MNIST, N-MNIST, and CIFAR10, and further test it on the MIT-BIH and gas-sensor datasets. The results show that our algorithm achieves accuracy comparable or superior to related work, with sparse spikes and synapses.
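The two regularizers described above can be paraphrased in a few lines: a spiking term that penalizes the mean firing rate in the loss, and a rewiring rule that prunes weak synapses and regrows the same number elsewhere. This is an illustrative paraphrase of the idea, not the paper's exact BPSR rule.

```python
import numpy as np

def loss_with_spike_reg(task_loss, spikes, beta=1e-3):
    """Add a firing-rate penalty so backprop trades accuracy for spike sparsity."""
    firing_rate = spikes.mean()   # fraction of active neuron-timesteps
    return task_loss + beta * firing_rate

def rewire(weights, prune_fraction=0.1, rng=None):
    """Prune the smallest-magnitude synapses and grow the same number at
    random empty positions, keeping the number of active synapses constant."""
    rng = rng or np.random.default_rng(0)
    w = weights.copy().ravel()
    active = np.flatnonzero(w)
    n_prune = int(len(active) * prune_fraction)
    victims = active[np.argsort(np.abs(w[active]))[:n_prune]]
    w[victims] = 0.0
    empty = np.flatnonzero(w == 0)
    born = rng.choice(empty, size=n_prune, replace=False)
    w[born] = rng.normal(scale=0.01, size=n_prune)
    return w.reshape(weights.shape)
```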

6.
Neural Netw ; 118: 352-362, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31376633

ABSTRACT

Structured-sparsity regularization is popular in sparse learning because of its flexibility in encoding feature structures. This paper considers a generalized version of structured-sparsity regularization (in particular, the l1/l∞ norm) with arbitrary group overlap. Because of the overlap, solving the associated proximal operator is time-consuming. Although Mairal et al. proposed a network-flow algorithm for this proximal operator, it remains slow, especially in high-dimensional settings. To address this challenge, we develop a more efficient solution for the l1/l∞ group lasso with arbitrary group overlap using an inexact proximal gradient method. In each iteration, our algorithm only needs to compute an inexact solution to the proximal sub-problem, which can be done efficiently. On the theoretical side, the proposed algorithm enjoys the same global convergence rate as exact proximal methods. Experiments demonstrate that our algorithm is much more efficient than the network-flow algorithm while retaining similar generalization performance.
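The scheme can be sketched as follows: each iteration takes a gradient step on the smooth loss, then only *approximately* solves the proximal sub-problem for the overlapping-group penalty by cycling a few times over the groups. For readability this sketch uses the per-group l2 prox; the paper targets the l1/l∞ penalty, and the block-coordinate sweeps are an assumed stand-in for its inexact sub-solver.

```python
import numpy as np

def prox_group_l2(v, lam):
    """Group soft-thresholding: shrink the whole group toward zero."""
    n = np.linalg.norm(v)
    return v * max(0.0, 1 - lam / n) if n > 0 else v

def inexact_prox(x, groups, lam, sweeps=3):
    """Approximate overlapping-group prox via a few block-coordinate sweeps."""
    z = x.copy()
    for _ in range(sweeps):
        for g in groups:
            z[g] = prox_group_l2(z[g], lam)
    return z

def inexact_proximal_gradient(A, y, groups, lam=0.1, step=None, iters=100):
    """Proximal gradient on 0.5*||Ax - y||^2 with an inexact prox step."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic loss
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = inexact_prox(x - step * grad, groups, step * lam)
    return x
```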


Subjects
Machine Learning
7.
Med Phys ; 45(6): 2439-2452, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29645279

ABSTRACT

PURPOSE: Low-dose computed tomography (CT) imaging has been widely explored because it reduces the radiation risk to patients. It also poses challenges for image quality, because a low radiation dose, achieved by reducing tube current and pulse duration, introduces severe noise. In this study, we investigate block-matching sparsity regularization (BMSR) and formulate an optimization problem for low-dose image reconstruction. METHOD: The objective function combines the sparse coding of BMSR with the analysis error, subject to the physical data measurements. A practical reconstruction algorithm using hard thresholding and projection onto convex sets is developed for fast, stable performance, and an efficient scheme for choosing the regularization parameters is analyzed and designed. RESULTS: In the experiments, the proposed method is compared with a conventional edge-preservation method and adaptive dictionary-based iterative reconstruction. Experiments with clinical images and real CT data show promising noise suppression and edge preservation compared with the competing methods. CONCLUSIONS: A block-matching-based reconstruction method for low-dose CT is proposed. The improvements in image quality are verified by quantitative metrics and visual comparisons, indicating the potential of the proposed method for real-life applications.
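The alternation described above, a data-consistency projection step followed by a hard-thresholding sparsity step, can be sketched as iterative hard thresholding. Block matching itself is stubbed out here: a global top-k threshold stands in for sparse coding of matched blocks, so this is an assumed simplification, not the paper's BMSR algorithm.

```python
import numpy as np

def pocs_data_step(x, A, y, step=0.5):
    """Move x toward the set {x : A x = y} by a gradient projection step
    (step should stay below 1/||A||^2 in general)."""
    return x - step * A.T @ (A @ x - y)

def hard_threshold(x, k):
    """Keep only the k largest-magnitude entries (the sparsity constraint)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def reconstruct(A, y, k, iters=100):
    """Alternate the data-consistency step with the sparsity step."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(pocs_data_step(x, A, y), k)
    return x
```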


Subjects
X-Ray Computed Tomography/methods, Algorithms, Computer Simulation, Head/diagnostic imaging, Humans, Radiation Dosage, Thorax/diagnostic imaging
8.
Front Neurosci ; 11: 635, 2017.
Article in English | MEDLINE | ID: mdl-29200994

ABSTRACT

The estimation of the sources generating the EEG constitutes an inverse problem (IP) in neuroscience. The problem is ill-posed because the solution is non-unique, so regularization or prior information is needed for electrophysiology source imaging. Structured-sparsity priors can be obtained through combinations of L1-norm-based and L2-norm-based constraints, such as the elastic net (ENET) and elitist lasso (ELASSO) models. The former finds solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal solution matrix. Both models have been addressed within the penalized-regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on computationally intensive Monte Carlo/expectation-maximization methods, which makes its application to the EEG IP impractical; ELASSO had not previously been considered in a Bayesian context. In this work, we solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a structured sparse Bayesian learning algorithm that combines empirical Bayes with iterative coordinate descent to estimate both the parameters and the hyperparameters. Using realistic simulations that avoid the inverse crime, we show that our methods recover complicated source configurations more accurately, with more robust hyperparameter estimation and better behavior across sparsity scenarios, than the classical LORETA, ENET, and LASSO Fusion solutions. We also apply the methods to data from a visual-attention experiment, finding more interpretable neurophysiological patterns.
The MATLAB code used in this work, including the simulations, methods, quality measures, and visualization routines, is freely available on a public website.
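The two penalties named above can be written out for a spatio-temporal solution matrix J (sources × time). ENET mixes L1 and L2 terms; ELASSO applies an L1 norm along one dimension inside an L2 norm along the other, so sparsity differs per dimension. The weights and the choice of which axis carries the inner L1 are illustrative assumptions.

```python
import numpy as np

def enet_penalty(J, alpha=1.0, beta=0.5):
    """Elastic net: L1 term (sparsity) plus squared-L2 term (smoothness)."""
    return alpha * np.abs(J).sum() + beta * (J ** 2).sum()

def elasso_penalty(J):
    """Elitist-lasso-style mixed norm: L1 across sources within each time
    point, then L2 across time points."""
    return np.sqrt((np.abs(J).sum(axis=0) ** 2).sum())
```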

9.
Philos Trans A Math Phys Eng Sci ; 373(2043)2015 Jun 13.
Article in English | MEDLINE | ID: mdl-25939620

ABSTRACT

We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
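A toy version of the empirical phase diagram described above: sweep a grid of undersampling ratios δ = m/n and sparsity ratios ρ = k/m, and record the fraction of random Gaussian problem instances a solver recovers exactly. Orthogonal matching pursuit stands in for the reconstruction algorithm; an X-ray CT phase diagram would substitute the projection geometry and a TV-regularized solver, as the abstract notes that the Gaussian theory does not carry over.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def success_rate(n, delta, rho, trials=10, tol=1e-8, rng=None):
    """Fraction of random k-sparse instances recovered at a (delta, rho) cell."""
    rng = rng or np.random.default_rng(0)
    m = max(1, int(delta * n))
    k = max(1, int(rho * m))
    wins = 0
    for _ in range(trials):
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        wins += np.linalg.norm(omp(A, A @ x, k) - x) < tol
    return wins / trials

def phase_diagram(n=40, deltas=(0.25, 0.5, 0.75), rhos=(0.2, 0.5, 0.8)):
    """Rows: undersampling delta; columns: sparsity rho."""
    return np.array([[success_rate(n, d, r) for r in rhos] for d in deltas])
```

Plotting the resulting grid reveals the sharp success/failure transition that phase-diagram analysis exploits to answer "how few projections suffice".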


Subjects
Algorithms, Data Compression/methods, Statistical Models, Radiographic Image Enhancement/methods, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Computer Simulation, Statistical Data Interpretation, Humans, Reproducibility of Results, Sample Size, Sensitivity and Specificity