Results 1 - 9 of 9
1.
IEEE Trans Med Imaging; PP, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753483

ABSTRACT

Photon-counting computed tomography (PCCT) reconstructs multiple energy-channel images of the same object, and these channel images are strongly correlated. In addition, the reconstruction of each channel image suffers from photon starvation. To fully exploit the correlation among channels for noise suppression and texture enhancement in each reconstructed channel image, this paper proposes a tensor neural network (TNN) architecture that learns a multi-channel texture prior for PCCT reconstruction. Specifically, we first learn a spatial texture prior within each individual channel image by modeling the relationship between a center pixel and its neighboring pixels with a neural network. We then merge the single-channel spatial texture prior into a multi-channel neural network to learn the spectral local correlation among channel images. Since the proposed TNN is trained on a series of unpaired small spatial-spectral cubes extracted from a single reference multi-channel image, the local correlation within those cubes is captured by the TNN. To boost TNN performance, a low-rank representation is also employed to model the global correlation among channel images. Finally, we integrate the learned TNN and the low-rank representation as priors into a Bayesian reconstruction framework. To evaluate the proposed method, four references are considered: simulated images from ultra-high-resolution CT, spectral images from dual-energy CT, and animal-tissue and preclinical mouse images from a custom-made PCCT system. Our TNN-prior Bayesian reconstruction outperformed state-of-the-art competing algorithms in terms of both preserving texture features and suppressing image noise in each channel image.
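The neighborhood relationship this abstract describes, predicting a center pixel from its surrounding pixels, can be illustrated with a minimal sketch. This is not the authors' TNN; a single linear layer fitted by least squares on an invented smooth toy texture stands in for the network:

```python
import numpy as np

def extract_pairs(img):
    """Collect (8-neighbor, center) training pairs from a 2-D image."""
    X, y = [], []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2].ravel()
            X.append(np.delete(patch, 4))  # the 8 neighbors
            y.append(patch[4])             # the center pixel
    return np.array(X), np.array(y)

# Toy "channel image": a smooth separable texture, so one fixed linear
# combination of neighbors reproduces the center pixel almost exactly.
u = np.sin(np.linspace(0.0, 4.0, 32))
v = np.cos(np.linspace(0.0, 4.0, 32))
img = u[:, None] * v[None, :]

X, y = extract_pairs(img)
# A single linear layer fitted by least squares stands in for the network.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = float(np.mean((A @ w - y) ** 2))
print(mse < 1e-6)  # neighbors explain the center pixel on this texture
```

On real channel images the relationship is nonlinear and noisy, which is why the paper replaces this linear fit with a neural network.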

2.
J Xray Sci Technol; 32(2): 173-205, 2024.
Article in English | MEDLINE | ID: mdl-38217633

ABSTRACT

BACKGROUND: In recent years, deep reinforcement learning (RL) has been applied to various medical tasks with encouraging results. OBJECTIVE: In this paper, we demonstrate the feasibility of deep RL for denoising simulated deep-silicon photon-counting CT (PCCT) data in both full and interior scan modes. PCCT offers higher spatial and spectral resolution than conventional CT, and therefore requires advanced denoising methods to suppress the increased noise. METHODS: We apply a dueling double deep Q network (DDDQN) to denoise PCCT data for maximum contrast-to-noise ratio (CNR), together with a multi-agent approach to handle data non-stationarity. RESULTS: Our method yielded significant image-quality improvement for single-channel scans and consistent improvement across all three channels of multichannel scans. For the single-channel interior scans, the PSNR (dB) and SSIM increased from 33.4078 and 0.9165 to 37.4167 and 0.9790, respectively. For the multichannel interior scans, the channel-wise PSNR (dB) increased from 31.2348, 30.7114, and 30.4667 to 31.6182, 30.9783, and 30.8427, respectively. Similarly, the SSIM improved from 0.9415, 0.9445, and 0.9336 to 0.9504, 0.9493, and 0.0326, respectively. CONCLUSIONS: Our results show that the RL approach improves image quality effectively, efficiently, and consistently across multiple spectral channels and has great potential in clinical applications.
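The two ingredients named in the method, the dueling head and the double-DQN target, can each be written in a few lines. This is a generic textbook formulation, not the paper's network; all shapes and values below are invented for the demo:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online network chooses the next action,
    the target network evaluates it."""
    a_star = np.argmax(q_online_next, axis=-1)
    chosen = np.take_along_axis(q_target_next, a_star[..., None], axis=-1)
    return reward + gamma * chosen.squeeze(-1)

q = dueling_q(np.array([1.0]), np.array([[0.5, -0.5, 0.0]]))
target = double_dqn_target(np.array([1.0]), 0.9,
                           np.array([[0.1, 0.9]]), np.array([[0.2, 0.7]]))
print(q)       # [[1.5, 0.5, 1.0]]: mean advantage subtracted, V added back
print(target)  # [1.63]: 1.0 + 0.9 * q_target[argmax(q_online)]
```

Subtracting the mean advantage keeps V and A identifiable; evaluating the online network's chosen action with the target network reduces the overestimation bias of plain Q-learning.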


Subject(s)
Algorithms; Silicon; X-Rays; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
3.
ArXiv; 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37873003

ABSTRACT

Computed tomography (CT) exposes the patient to ionizing radiation. To reduce the radiation dose, we can either lower the X-ray photon count or down-sample the projection views; however, either approach often compromises image quality. To address this challenge, we introduce an iterative reconstruction algorithm regularized by a diffusion prior. Drawing on the imaging capability of the denoising diffusion probabilistic model (DDPM), we merge it with a reconstruction procedure that prioritizes data fidelity. This fusion capitalizes on the merits of both techniques, delivering excellent reconstruction results in an unsupervised framework. To further improve the efficiency of the reconstruction process, we incorporate Nesterov momentum acceleration, which enables high-quality diffusion sampling in fewer steps. As demonstrated in our experiments, our method offers a potential pathway to high-definition CT image reconstruction with minimized radiation dose.
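The alternation of a data-fidelity gradient step, a prior step, and Nesterov extrapolation can be sketched as follows. The DDPM prior is replaced by an identity placeholder and the CT projector by a random matrix, so this only demonstrates the accelerated-iteration skeleton, not the diffusion model itself:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 10))   # toy forward operator, stand-in for the CT projector
x_true = rng.normal(size=10)
b = A @ x_true                   # noiseless measurements keep the sketch simple

def prior_step(x):
    """Placeholder for the DDPM denoising step; identity here, so the
    sketch isolates the Nesterov-accelerated fidelity updates."""
    return x

x = np.zeros(10)
z = x.copy()
t = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
for _ in range(300):
    x_new = prior_step(z - step * A.T @ (A @ z - b))  # fidelity gradient, then prior
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    z = x_new + ((t - 1.0) / t_new) * (x_new - x)     # Nesterov extrapolation
    x, t = x_new, t_new
err = float(np.linalg.norm(x - x_true))
print(err < 1e-6)  # momentum drives this well-conditioned toy problem to x_true
```

In the actual method, `prior_step` would be a diffusion sampling update, and the momentum serves to reach a usable reconstruction in fewer such steps.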

4.
Sensors (Basel); 23(3), 2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772417

ABSTRACT

Most penalized maximum-likelihood methods for tomographic image reconstruction based on Bayes' law include a freely adjustable hyperparameter that balances the data fidelity term and the prior/penalty term for a specific noise-resolution tradeoff. In many applications, the hyperparameter is determined empirically in a trial-and-error fashion, selecting the optimal result from multiple iterative reconstructions. These penalized methods are not only time-consuming by their iterative nature but also require manual adjustment. This study investigates a theory-based strategy for Bayesian image reconstruction without a freely adjustable hyperparameter, substantially saving time and computational resources. The Bayesian image reconstruction problem is formulated with two probability density functions (PDFs), one for the data fidelity term and the other for the prior term. In formulating these PDFs, we introduce two parameters. While the two parameters ensure that the PDFs completely describe the data and prior terms, they cannot be determined from the acquired data; thus, they are called complete but unobservable parameters. Estimating them becomes possible under conditional expectation and maximization for the image reconstruction, given the acquired data and the PDFs. This leads to an iterative algorithm, denoted joint-parameter-Bayes, which jointly estimates the two parameters and computes the to-be-reconstructed image by maximizing the a posteriori probability. In addition to the theoretical formulation, comprehensive simulation experiments analyze the stopping criterion of the iterative joint-parameter-Bayes method. Finally, given the data, an optimal reconstruction is obtained without any freely adjustable hyperparameter by satisfying the PDF conditions for both the data likelihood and the prior probability, and by satisfying the stopping criterion.
Moreover, the stability of joint-parameter-Bayes is investigated with respect to initialization, PDF specification, and iterative renormalization. Both phantom simulations and clinical patient data show that joint-parameter-Bayes provides reconstructed image quality comparable to conventional methods with much less reconstruction time. To probe the algorithm's response to different types of noise, three common noise models are applied to the simulation data: white Gaussian noise on post-log sinogram data, Poisson-like signal-dependent noise on post-log sinogram data, and Poisson noise on pre-log transmission data. For white Gaussian noise, the two parameters estimated by joint-parameter-Bayes agree well with the simulation settings. The parameter introduced to satisfy the prior's PDF is observed to be the more sensitive trigger for stopping the iteration under all three noise models. A stability investigation showed that initialization with a filtered-back-projection image is very robust. Clinical patient data demonstrated the effectiveness of the proposed joint-parameter-Bayes and its stopping criterion.
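The idea of jointly estimating the image and the two PDF parameters can be illustrated on a toy Gaussian model, not the paper's CT likelihood: each latent value gets several noisy observations, and an EM-style loop alternates between the MAP estimate and the two scale parameters, recovering the noise and prior scales (0.5 and 2.0) used to simulate the data:

```python
import numpy as np

rng = np.random.default_rng(2)
x_latent = rng.normal(scale=2.0, size=5000)                    # prior scale tau = 2.0
y = x_latent[:, None] + rng.normal(scale=0.5, size=(5000, 4))  # noise scale sigma = 0.5

sigma2, tau2 = 1.0, 1.0          # the two "complete but unobservable" parameters
m = y.shape[1]                   # observations per latent value
for _ in range(100):
    prec = m / sigma2 + 1.0 / tau2
    x_map = (y.sum(axis=1) / sigma2) / prec       # MAP estimate given both PDFs
    post_var = 1.0 / prec                         # posterior variance of each x
    sigma2 = np.mean((y - x_map[:, None]) ** 2) + post_var  # data-PDF parameter
    tau2 = np.mean(x_map ** 2) + post_var                   # prior-PDF parameter
print(float(np.sqrt(sigma2)), float(np.sqrt(tau2)))  # close to 0.5 and 2.0
```

The `+ post_var` terms are the conditional-expectation corrections; dropping them would bias both estimates low, which is the kind of effect the paper's stopping-criterion analysis guards against.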


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Bayes Theorem; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Computer Simulation; Phantoms, Imaging
5.
IEEE Trans Med Imaging; 42(11): 3129-3139, 2023 Nov.
Article in English | MEDLINE | ID: mdl-34968178

ABSTRACT

In an earlier study, we proposed a regional Markov-random-field, tissue-specific texture prior extracted from a previous full-dose CT (FdCT) scan for current low-dose CT (LdCT) imaging, which showed clinical benefit in task-based evaluation. However, that study made two assumptions: (1) the center pixel has a linear relationship with its nearby neighbors, and (2) a previous FdCT scan of the same subject is available. To eliminate both assumptions, we propose a database-assisted, end-to-end LdCT reconstruction framework that includes a deep-learning texture prior model and a multi-modality-feature-based candidate selection model. A convolutional-neural-network texture prior removes the linearity assumption. For scenarios in which the subject has no previous FdCT scan, we select a suitable prior candidate from an FdCT database using features from three modalities: the subject's physiological factors, the CT scan protocol, and a novel feature, named Lung Mark, deliberately designed to reflect the z-axial property of human anatomy. Moreover, a majority-vote strategy is designed to overcome noise effects in the LdCT scans. Experimental results showed the effectiveness of Lung Mark. The selection model achieved 84% accuracy when tested on 1,470 images from 49 subjects. The texture prior learned from the FdCT database yielded reconstructions comparable to those of subjects with their own corresponding FdCT scans. This study demonstrates the feasibility of bringing clinically relevant textures from an available FdCT database into Bayesian reconstruction of any current LdCT scan.
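The candidate-selection-with-majority-vote step can be sketched as a nearest-neighbor vote in feature space. The feature vectors and database entries below are invented toy values, not the paper's multi-modality features:

```python
import numpy as np

def select_prior(ld_features, db_features):
    """Each low-dose slice votes for its nearest database candidate
    (Euclidean distance in feature space); a majority vote over
    slices suppresses the effect of noisy individual slices."""
    d = np.linalg.norm(ld_features[:, None, :] - db_features[None, :, :], axis=2)
    votes = np.argmin(d, axis=1)                  # one vote per slice
    return int(np.bincount(votes, minlength=len(db_features)).argmax())

db = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])   # 3 candidate subjects
slices = np.array([[4.8, 5.1], [5.2, 4.7], [0.2, 0.1], [5.0, 5.3]])
choice = select_prior(slices, db)
print(choice)  # 3 of 4 noisy slices agree on candidate 1
```

One outlier slice votes for candidate 0, but the majority vote still selects candidate 1, which is the robustness property the abstract attributes to the strategy.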


Subject(s)
Image Processing, Computer-Assisted; Lung; Humans; Image Processing, Computer-Assisted/methods; Bayes Theorem; Lung/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Algorithms
6.
IEEE Trans Med Imaging; 39(10): 2996-3007, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32217474

ABSTRACT

Photon-counting spectral computed tomography (CT) is capable of material characterization and can improve diagnostic performance over traditional clinical CT. However, it suffers from photon starvation in each individual energy channel, which may cause severe artifacts in the reconstructed images. Furthermore, since the images in different energy channels describe the same object, the channels are highly correlated. To make full use of the inter-channel correlations and minimize the photon-starvation effect while maintaining clinically meaningful texture information, this paper combines a region-specific texture model with a low-rank correlation descriptor as an a priori regularization, seeking a texture-preserving Bayesian reconstruction for spectral CT. Specifically, the inter-channel correlations are characterized by a low-rank representation, and the intra-channel regional textures are modeled by a texture-preserving Markov random field; in other words, the spectral and spatial information is integrated into a unified Bayesian reconstruction framework. The widely used Split-Bregman algorithm is employed to minimize the objective function because the low-rank representation is non-differentiable. To evaluate the tissue-texture-preserving performance of the proposed method in each channel, three references are used for comparison: traditional CT images from energy-integrating detection, spectral images from dual-energy CT, and individual channel images from a custom-made photon-counting spectral CT.
As expected, the proposed method produced promising results, not only preserving texture features but also suppressing image noise in each channel, compared with existing methods of total variation (TV), low-rank TV, and tensor dictionary learning, by both visual inspection and the quantitative indexes of root-mean-square error, peak signal-to-noise ratio, structural similarity, and feature similarity.
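The low-rank sub-step that splitting schemes such as Split-Bregman typically isolate is singular-value thresholding, the proximal operator of the nuclear norm. A sketch on synthetic data, with energy channels stacked as matrix columns (the threshold value is an invented illustration constant):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink singular values by tau and
    drop those below it, the nuclear-norm proximal step used when a
    non-differentiable low-rank term is split off."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(3)
# Rank-1 "spectral" structure (channels are scaled copies) plus noise.
base = rng.normal(size=(64, 1)) @ rng.normal(size=(1, 8))
noisy = base + 0.01 * rng.normal(size=(64, 8))
low_rank = svt(noisy, tau=1.0)
rank = int(np.linalg.matrix_rank(low_rank, tol=1e-6))
print(rank)  # thresholding removes the noise-level singular values
```

The dominant singular value (the shared inter-channel structure) survives the threshold while the noise-level ones are zeroed, which is how the low-rank descriptor encodes inter-channel correlation.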


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Bayes Theorem; Phantoms, Imaging; Signal-To-Noise Ratio
7.
J Med Imaging (Bellingham); 7(3): 032502, 2020 May.
Article in English | MEDLINE | ID: mdl-32118093

ABSTRACT

Purpose: Bayesian theory provides a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with one term modeling the statistical properties of the data and another incorporating a priori knowledge of the image to be reconstructed. We investigate the feasibility of a machine learning (ML) strategy, specifically a convolutional neural network (CNN), for constructing a tissue-specific texture prior from previous full-dose computed tomography. Approach: Our study constructs four tissue-specific texture priors, corresponding to lung, bone, fat, and muscle, and integrates each prior with the pre-log shifted-Poisson (SP) data model for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented as an algorithm called SP-CNN-T and compared with our previous Markov-random-field-based tissue-specific texture prior algorithm, SP-MRF-T. Results: In addition to the conventional quantitative measures of mean squared error and peak signal-to-noise ratio, the structural similarity index, feature similarity, and Haralick texture features were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms in terms of structure and tissue-texture preservation, demonstrating the feasibility and potential of the investigated ML approach. Conclusions: Both the training performance and the image reconstruction results show the feasibility of constructing a CNN texture prior model and its potential to improve the structure preservation of nodules compared with our previous regional tissue-specific MRF texture prior model.

8.
Vis Comput Ind Biomed Art; 2(1): 16, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32226923

ABSTRACT

Tissue texture reflects the spatial distribution of contrasts in image voxel gray levels, i.e., tissue heterogeneity, and has been recognized as an important biomarker in various clinical tasks. Spectral computed tomography (CT) is believed to enrich tissue texture by providing images with different voxel contrasts at different X-ray energies. This paper therefore addresses two related issues for the clinical use of spectral CT, especially photon-counting CT (PCCT): (1) texture enhancement by spectral CT image reconstruction, and (2) spectrally enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior, in addition to a low-rank prior, for the low-count image reconstruction problem in each energy channel of PCCT under Bayesian theory. Reconstruction results showed that the proposed method outperforms the existing methods of total variation (TV), low-rank TV, and tensor dictionary learning, not only preserving texture features but also suppressing image noise. For issue (2), this paper investigates three models for incorporating the texture enriched by PCCT, corresponding to three types of input: the spectral images themselves, the co-occurrence matrices (CMs) extracted from the spectral images, and the Haralick features (HFs) extracted from the CMs. Studies were performed on simulated photon-counting data generated by applying an attenuation-energy response curve to traditional CT images from energy-integrating detectors. Classification results showed that the spectral-CT-enriched texture model improved the area under the receiver operating characteristic curve (AUC) by 7.3%, 0.42%, and 3.0% for the spectral images, CMs, and HFs, respectively, on the five-energy spectral data relative to the original single-energy data. The CM and HF inputs achieved the best AUCs of 0.934 and 0.927.
This texture-themed study illustrates that incorporating clinically important prior information, such as tissue texture, into medical imaging, from upstream image reconstruction to downstream diagnosis, can benefit clinical tasks.
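The co-occurrence-matrix and Haralick-feature inputs named above can be sketched for a single pixel offset. This is the textbook GLCM computation on a tiny invented image, not the paper's exact feature pipeline:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability table."""
    cm = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    for i, j in zip(a.ravel(), b.ravel()):
        cm[i, j] += 1
    return cm / cm.sum()

def haralick(cm):
    """Two classic Haralick features computed from the co-occurrence matrix."""
    i, j = np.indices(cm.shape)
    return {"contrast": float(((i - j) ** 2 * cm).sum()),
            "energy": float((cm ** 2).sum())}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = haralick(glcm(img, levels=4))
print(feats)  # contrast 1/3, energy 1/6 for this blocky texture
```

A full Haralick set would add entropy, correlation, homogeneity, and so on, and average them over several offsets; the paper's CM and HF inputs are multi-energy extensions of exactly this construction.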

9.
IEEE Trans Med Imaging; 37(6): 1348-1357, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29870364

ABSTRACT

The continuous development and extensive use of computed tomography (CT) in medical practice has raised public concern over the associated radiation dose to the patient. Reducing the radiation dose, however, may increase noise and artifacts, which can adversely affect radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve diagnostic performance, a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most algorithms developed for this application, including the recently popularized deep learning techniques, aim to minimize the mean squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on a generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance, a key concept of optimal transport theory, promises to improve GAN performance. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses on statistically shifting the data noise distribution from strong to weak. Our proposed method thus transfers knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also keeping critical information intact. Promising results have been obtained in our experiments with clinical CT images.
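The combined objective this abstract describes, an adversarial Wasserstein term plus a weighted perceptual term, can be sketched with plain arrays standing in for critic outputs and feature maps. The weight `lam` and all values are invented illustration constants, not the paper's settings:

```python
import numpy as np

def wasserstein_critic_loss(critic_real, critic_fake):
    """Critic side of the WGAN objective: maximize D(real) - D(fake),
    written here as a loss to be minimized."""
    return float(-(np.mean(critic_real) - np.mean(critic_fake)))

def perceptual_loss(feat_denoised, feat_truth):
    """MSE between feature-space representations (e.g. activations of a
    pretrained network), rather than pixel-wise MSE."""
    return float(np.mean((feat_denoised - feat_truth) ** 2))

def generator_loss(critic_fake, feat_denoised, feat_truth, lam=0.1):
    """Combined generator objective: adversarial term plus a weighted
    perceptual term, as the abstract describes."""
    return float(-np.mean(critic_fake)) + lam * perceptual_loss(feat_denoised,
                                                                feat_truth)

c = wasserstein_critic_loss(np.array([1.0]), np.array([0.0]))
g = generator_loss(np.array([0.2, 0.4]), np.ones(4), np.zeros(4), lam=0.1)
print(c, g)  # -1.0 and -0.3 + 0.1 * 1.0 = -0.2
```

Replacing pixel-wise MSE with the perceptual term in the generator loss is what lets the method trade a little PSNR for better-preserved structural detail.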


Subject(s)
Radiation Dosage; Signal Processing, Computer-Assisted; Tomography, X-Ray Computed/methods; Algorithms; Artifacts; Deep Learning; Humans; Image Processing, Computer-Assisted