Results 1 - 20 of 163
1.
Phys Med Biol ; 69(11), 2024 May 14.
Article in English | MEDLINE | ID: mdl-38640913

ABSTRACT

Objective. Digital breast tomosynthesis (DBT) has significantly improved the diagnosis of breast cancer due to its high sensitivity and specificity in detecting breast lesions compared to two-dimensional mammography. However, one of the primary challenges in DBT is the image blur resulting from x-ray source motion, particularly in DBT systems with a source in continuous-motion mode. This motion-induced blur can degrade the spatial resolution of DBT images, potentially affecting the visibility of subtle lesions such as microcalcifications. Approach. We addressed this issue by deriving an analytical in-plane source blur kernel for DBT images based on imaging geometry and proposing a post-processing image deblurring method with a generative diffusion model as an image prior. Main results. We showed that the source blur could be approximated by a shift-invariant kernel over the DBT slice at a given height above the detector, and we validated the accuracy of our blur kernel modeling through simulation. We also demonstrated the ability of the diffusion model to generate realistic DBT images. The proposed deblurring method successfully enhanced spatial resolution when applied to DBT images reconstructed with detector blur and correlated noise modeling. Significance. Our study demonstrated the advantages of modeling imaging system components such as source motion blur for improving DBT image quality.
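
The abstract's key modeling claim is that the in-plane source blur is approximately shift-invariant at a fixed slice height, so it acts like a single convolution kernel per slice. A minimal sketch of that forward model follows, assuming an illustrative 1D box profile along the source-motion direction; the paper instead derives the kernel analytically from the imaging geometry.

```python
# Sketch: shift-invariant in-plane source blur applied to one DBT slice.
# The box kernel below is a stand-in; the paper's kernel comes from the
# acquisition geometry and grows with slice height above the detector.
import numpy as np
from scipy.signal import fftconvolve

def source_blur_kernel(extent_px: int) -> np.ndarray:
    """Hypothetical 1D motion-blur profile along the source sweep direction."""
    k = np.ones((1, extent_px))
    return k / k.sum()

slice_img = np.random.rand(256, 256)       # stand-in DBT slice at one height
kernel = source_blur_kernel(extent_px=5)   # wider for slices farther from detector
blurred = fftconvolve(slice_img, kernel, mode="same")
```

A deblurring method such as the one proposed would then invert this convolution, with the diffusion model supplying the image prior.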


Subjects
Mammography, Mammography/methods, Humans, Diffusion, Image Processing, Computer-Assisted/methods, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/physiopathology, X-Rays, Motion, Female, Motion (Physics)
2.
Nat Commun ; 15(1): 3555, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38670945

ABSTRACT

Measuring the three-dimensional (3D) distribution of chemistry in nanoscale matter is a longstanding challenge for metrological science. The inelastic scattering events required for 3D chemical imaging are too rare, requiring high beam exposure that destroys the specimen before an experiment is completed. Even larger doses are required to achieve high resolution. Thus, chemical mapping in 3D has been unachievable except at lower resolution with the most radiation-hard materials. Here, high-resolution 3D chemical imaging is achieved near or below one-nanometer resolution in an Au-Fe3O4 metamaterial within an organic ligand matrix, Co3O4-Mn3O4 core-shell nanocrystals, and ZnS-Cu0.64S0.36 nanomaterial using fused multi-modal electron tomography. Multi-modal data fusion enables high-resolution chemical tomography often with 99% less dose by linking information encoded within both elastic (HAADF) and inelastic (EDX/EELS) signals. We thus demonstrate that sub-nanometer 3D resolution of chemistry is measurable for a broad class of geometrically and compositionally complex materials.
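
A schematic way to write the data-fusion idea, assuming a linear tomographic projector and Poisson-distributed inelastic counts (the weights $\lambda_j$, the Z-contrast exponent $\gamma$, and the TV term below are illustrative assumptions, not the paper's exact objective):

$$
\hat{x} = \arg\min_{x \ge 0} \; \frac{\lambda_1}{2} \Big\| A \Big( \sum_i x_i \Big)^{\gamma} - b_{\mathrm{HAADF}} \Big\|_2^2 \;+\; \lambda_2 \sum_i \mathbf{1}^{\top} \big( A x_i - b_i \odot \log(A x_i) \big) \;+\; \lambda_3 \sum_i \mathrm{TV}(x_i),
$$

where $x_i$ are the 3D chemical maps, $A$ is the tomographic projector, $b_{\mathrm{HAADF}}$ is the elastic signal, and $b_i$ are the inelastic (EDX/EELS) measurements; the middle (Poisson negative log-likelihood) term lets the sparse chemical counts borrow structure from the high-SNR elastic channel, which is how the dose reduction arises.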

3.
IEEE Trans Med Imaging ; PP, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38526890

ABSTRACT

Oscillating Steady-State Imaging (OSSI) is a recently developed fMRI acquisition method that can provide 2 to 3 times higher SNR than standard fMRI approaches. However, because the OSSI signal exhibits a nonlinear oscillation pattern, one must acquire and combine n_c (e.g., 10) OSSI images to obtain an image that is free of oscillation for fMRI, and fully sampled acquisitions would compromise temporal resolution. To improve temporal resolution and accurately model the nonlinearity of OSSI signals, instead of using subspace models that are not well suited to the data, we build the MR physics of OSSI signal generation into a regularizer for the undersampled reconstruction. Our proposed physics-based manifold model turns the disadvantages of OSSI acquisition into advantages and enables joint reconstruction and quantification. The OSSI manifold model (OSSIMM) outperforms subspace models and reconstructs high-resolution fMRI images with a factor of 12 acceleration and without spatial or temporal smoothing. Furthermore, OSSIMM can dynamically quantify important physics parameters, including R2* maps, with a temporal resolution of 150 ms.
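
The oscillation-removal step mentioned above is commonly done by a 2-norm (root-sum-of-squares) combination of the n_c fast-time images; a minimal sketch of that baseline follows. OSSIMM's contribution is to model the oscillation rather than discard it.

```python
# Sketch: combining n_c oscillating OSSI images into one oscillation-free
# magnitude image via root-sum-of-squares (baseline approach; OSSIMM instead
# models the oscillation with MR physics).
import numpy as np

nc, ny, nx = 10, 64, 64
ossi = np.random.randn(nc, ny, nx) + 1j * np.random.randn(nc, ny, nx)  # stand-in data
combined = np.sqrt(np.sum(np.abs(ossi) ** 2, axis=0))
```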

4.
Magn Reson Med ; 91(5): 2104-2113, 2024 May.
Article in English | MEDLINE | ID: mdl-38282253

ABSTRACT

PURPOSE: The aim of this study was to develop a reconstruction method that more fully models the signals and reconstructs gradient echo (GRE) images without sacrificing signal-to-noise ratio or spatial resolution, compared to conventional gridding and model-based image reconstruction methods. METHODS: By modeling the trajectories for every spoke and simplifying the scenario to a mixture of only echo-in and echo-out signals, the approach explicitly models the overlapping echoes. After modeling the overlapping echoes with two system matrices, we use the conjugate gradient algorithm (CG-SENSE) with the nonuniform FFT (NUFFT) to optimize the image reconstruction cost function. RESULTS: The proposed method is demonstrated in phantom and in-vivo volunteer experiments for three-dimensional, high-resolution T2*-weighted imaging and functional MRI tasks. Compared to the gridding method, the high-resolution protocol exhibits improved spatial resolution and reduced signal loss as a result of less intra-voxel dephasing. The fMRI task shows that the proposed model-based method produces images with reduced artifacts and blurring as well as more stable and prominent time courses. CONCLUSION: The proposed model-based reconstruction shows improved spatial resolution and reduced artifacts. The fMRI task shows improved time series and activation maps due to the reduced overlapping echoes and under-sampling artifacts.
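
The two-system-matrix idea amounts to a stacked linear model y ≈ [A1 A2][x1; x2] solved by conjugate gradients. A toy sketch follows, with small dense matrices standing in for the NUFFT-based echo-in/echo-out operators and coil sensitivities omitted:

```python
# Sketch: least-squares reconstruction with two system matrices via CG on
# the normal equations. Dense random matrices stand in for the NUFFT-based
# echo-in (A1) and echo-out (A2) operators.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

m, n = 200, 64
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((m, n)), rng.standard_normal((m, n))
A = np.hstack([A1, A2])                       # stacked forward model [A1 A2]
x_true = rng.standard_normal(2 * n)
y = A @ x_true

normal = LinearOperator((2 * n, 2 * n), matvec=lambda v: A.T @ (A @ v),
                        dtype=np.float64)
x_hat, info = cg(normal, A.T @ y)             # CG-SENSE-style solve
```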


Subjects
Algorithms, Image Processing, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Phantoms, Imaging, Artifacts
5.
EJNMMI Phys ; 10(1): 82, 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38091168

ABSTRACT

PURPOSE: 90Y SPECT-based dosimetry following radioembolization (RE) in liver malignancies is challenging due to the inherent scatter and poor spatial resolution of bremsstrahlung SPECT. This study explores a deep-learning-based absorbed dose-rate estimation method for 90Y that mitigates the impact of poor SPECT image quality on dosimetry and the accuracy-efficiency trade-off of Monte Carlo (MC)-based scatter estimation and voxel dosimetry methods. METHODS: Our unified framework consists of three stages: convolutional neural network (CNN)-based bremsstrahlung scatter estimation, SPECT reconstruction with scatter correction (SC), and absorbed dose-rate map generation with a residual learning network (DblurDoseNet). The input to the framework is the measured SPECT projections and CT, and the output is the absorbed dose-rate map. For training and testing under realistic conditions, we generated a series of virtual patient phantom activity/density maps from post-therapy images of patients treated with 90Y-RE at our clinic. To train the scatter estimation network, we used the scatter projections for phantoms generated from MC simulation as the ground truth (GT). To train the dosimetry network, we used MC dose-rate maps generated directly from the activity/density maps of phantoms as the GT (Phantom + MC Dose). We compared the performance of our framework (SPECT w/CNN SC + DblurDoseNet) and MC dosimetry (SPECT w/CNN SC + MC Dose) using normalized root mean square error (NRMSE) and normalized mean absolute error (NMAE) relative to the GT. RESULTS: When testing on virtual patient phantoms, our CNN-predicted scatter projections had an NRMSE of 4.0% ± 0.7% on average. For the SPECT reconstruction with CNN SC, we observed a significant improvement in NRMSE (9.2% ± 1.7%) compared to reconstructions with no SC (149.5% ± 31.2%). In terms of virtual patient dose-rate estimation, SPECT w/CNN SC + DblurDoseNet had an NMAE of 8.6% ± 5.7% and 5.4% ± 4.8% in lesions and healthy livers, respectively, compared to 24.0% ± 6.1% and 17.7% ± 2.1% for SPECT w/CNN SC + MC Dose. In patient dose-rate maps, though no GT was available, we observed sharper lesion boundaries and increased lesion-to-background ratios with our framework. For a typical patient data set, the trained networks took ~1 s to generate the scatter estimate and ~20 s to generate the dose-rate map (matrix size: 512 × 512 × 194) on a single GPU (NVIDIA V100). CONCLUSION: Our deep learning framework, trained using true activity/density maps, has the potential to outperform non-learning voxel dosimetry methods such as MC that depend on SPECT image quality. Across comprehensive testing and evaluations on multiple targeted lesions and healthy livers in virtual patients, our proposed deep learning framework demonstrated higher estimation accuracy (66% on average in terms of NMAE) than the current "gold-standard" MC method. The enhanced computing speed of our framework, achieved without sacrificing accuracy, is highly relevant for clinical dosimetry following 90Y-RE.
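
The three-stage structure is easiest to see as a data-flow sketch. Everything below is a placeholder (the real stages are a trained scatter CNN, an iterative reconstruction with scatter correction, and DblurDoseNet); only the order of operations reflects the abstract:

```python
# Data-flow sketch of the three-stage framework; all stages are toy stand-ins.
import numpy as np

def cnn_scatter_estimate(projections):
    return 0.1 * projections                     # stand-in for the scatter CNN

def reconstruct_with_sc(projections, scatter):
    return np.maximum(projections - scatter, 0)  # stand-in for recon w/ SC

def dblurdosenet(activity, density):
    return activity / np.maximum(density, 1e-3)  # stand-in for the dose-rate net

proj, density = np.random.rand(128, 128), np.ones((128, 128))
scatter = cnn_scatter_estimate(proj)             # stage 1
activity = reconstruct_with_sc(proj, scatter)    # stage 2
dose_rate = dblurdosenet(activity, density)      # stage 3
```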

6.
Phys Med Biol ; 68(24), 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-37988758

ABSTRACT

Objective. Digital breast tomosynthesis (DBT) is a quasi-three-dimensional breast imaging modality that improves breast cancer screening and diagnosis because it reduces fibroglandular tissue overlap compared with 2D mammography. However, DBT suffers from noise and blur problems that can lower the detectability of subtle signs of cancers such as microcalcifications (MCs). Our goal is to improve the image quality of DBT in terms of image noise and MC conspicuity. Approach. We proposed a model-based deep convolutional neural network (deep CNN or DCNN) regularized reconstruction (MDR) for DBT. It combines a model-based iterative reconstruction (MBIR) method that models the detector blur and correlated noise of the DBT system with a learning-based DCNN denoiser, using the regularization-by-denoising framework. To facilitate task-based image quality assessment, we also proposed two DCNN tools for image evaluation: a noise estimator (CNN-NE) trained to estimate the root-mean-square (RMS) noise of the images, and an MC classifier (CNN-MC) as a DCNN model observer to evaluate the detectability of clustered MCs in human subject DBTs. Main results. We demonstrated the efficacy of CNN-NE and CNN-MC on a set of physical phantom DBTs. Among the reconstruction methods studied, the MDR method achieved low RMS noise and the highest detection area under the receiver operating characteristic curve (AUC) rankings evaluated by CNN-NE and CNN-MC on an independent test set of human subject DBTs. Significance. CNN-NE and CNN-MC may serve as cost-effective surrogates for human observers, providing task-specific metrics for image quality comparisons. The proposed reconstruction method shows the promise of combining physics-based MBIR and learning-based DCNNs for DBT image reconstruction, which may potentially lead to lower dose and higher sensitivity and specificity for MC detection in breast cancer screening and diagnosis.
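
The regularization-by-denoising (RED) framework mentioned above couples a physics-based data-fit term with a denoiser through the residual x − D(x). A minimal sketch under simplifying assumptions (a generic matrix A in place of the DBT system model with detector blur and correlated noise, and a Gaussian filter in place of the trained DCNN denoiser):

```python
# Sketch of a RED-style gradient iteration: grad = A'(Ax - y) + lam*(x - D(x)).
import numpy as np
from scipy.ndimage import gaussian_filter

def red_reconstruct(A, y, shape, lam=0.5, mu=1e-3, n_iter=100):
    x = A.T @ y
    for _ in range(n_iter):
        denoised = gaussian_filter(x.reshape(shape), sigma=1.0).ravel()
        grad = A.T @ (A @ x - y) + lam * (x - denoised)
        x = x - mu * grad
    return x

A = np.eye(256)                       # stand-in system matrix
y = np.random.rand(256)               # stand-in measurements
x_hat = red_reconstruct(A, y, shape=(16, 16))
```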


Subjects
Breast Neoplasms, Calcinosis, Humans, Female, Mammography/methods, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Neural Networks, Computer, Sensitivity and Specificity, Calcinosis/diagnostic imaging
9.
IEEE Trans Comput Imaging ; 9: 43-54, 2023.
Article in English | MEDLINE | ID: mdl-37090025

ABSTRACT

There is growing interest in learning Fourier domain sampling strategies (particularly for magnetic resonance imaging, MRI) using optimization approaches. For non-Cartesian sampling, the system models typically involve non-uniform fast Fourier transform (NUFFT) operations. Commonly used NUFFT algorithms contain frequency domain interpolation, which is not differentiable with respect to the sampling pattern, complicating the use of gradient methods. This paper describes an efficient and accurate approach for computing approximate gradients involving NUFFTs. Multiple numerical experiments validate the improved accuracy and efficiency of the proposed approximation. As an application to computational imaging, the NUFFT Jacobians were used to optimize non-Cartesian MRI sampling trajectories via data-driven stochastic optimization. Specifically, the sampling patterns were learned with respect to various model-based image reconstruction (MBIR) algorithms. The proposed approach enables sampling optimization for image sizes that are infeasible with standard auto-differentiation methods due to memory limits. The synergistic acquisition and reconstruction design leads to remarkably improved image quality. In fact, we show that model-based image reconstruction methods with suitably optimized imaging parameters can perform nearly as well as CNN-based methods.
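
To see why the Jacobian itself is NUFFT-like, consider a 1D model (a sketch of the structure, not the paper's full multidimensional derivation):

$$
y_m = [A(\omega)x]_m = \sum_{n=0}^{N-1} x_n e^{-i \omega_m n}
\quad\Longrightarrow\quad
\frac{\partial y_m}{\partial \omega_m} = -i \sum_{n=0}^{N-1} n\, x_n e^{-i \omega_m n} = -i\, [A(\omega)(n \odot x)]_m ,
$$

so the exact derivative with respect to a sample location is another nonuniform transform applied to a coordinate-weighted image, and it can therefore be approximated with the same fast NUFFT machinery rather than by auto-differentiating through the non-differentiable interpolation step.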

10.
Magn Reson Med ; 90(2): 417-431, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37066854

ABSTRACT

PURPOSE: Optimizing three-dimensional (3D) k-space sampling trajectories is important for efficient MRI yet presents a challenging computational problem. This work proposes a generalized framework for optimizing 3D non-Cartesian sampling patterns via data-driven optimization. METHODS: We built a differentiable simulation model to enable gradient-based methods for sampling trajectory optimization. The algorithm can simultaneously optimize multiple properties of sampling patterns, including image quality, hardware constraints (maximum slew rate and gradient strength), reduced peripheral nerve stimulation (PNS), and parameter-weighted contrast. The proposed method can either optimize the gradient waveform (spline-based freeform optimization) or optimize properties of given sampling trajectories (such as the rotation angle of radial trajectories). Notably, the method can optimize sampling trajectories synergistically with either model-based or learning-based reconstruction methods. We proposed several strategies to alleviate the severe nonconvexity and heavy computational demand posed by the large problem scale. The corresponding code is available as an open-source toolbox. RESULTS: We applied the optimized trajectories to multiple applications, including structural and functional imaging. In the simulation studies, the image quality of a 3D kooshball trajectory was improved from 0.29 to 0.22 (NRMSE) with SNOPY (Stochastic optimization framework for 3D NOn-Cartesian samPling trajectorY) optimization. In the prospective studies, by optimizing the rotation angles of a stack-of-stars (SOS) trajectory, SNOPY reduced the NRMSE of reconstructed images from 1.19 to 0.97 compared to the best empirical method (RSOS-GR). Optimizing the gradient waveform of a rotational EPI trajectory improved participants' rating of PNS from "strong" to "mild." CONCLUSION: SNOPY provides an efficient, data-driven, optimization-based method to tailor non-Cartesian sampling trajectories.
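
A toy version of the gradient-based loop, showing how a hardware term such as a slew-rate penalty enters the objective; the trajectory variable, dwell time, slew limit, and loss weights below are illustrative, and the image-quality term is a placeholder for the reconstruction-based loss SNOPY actually uses:

```python
# Sketch: optimizing a k-space trajectory with a soft slew-rate constraint.
import torch

traj = torch.randn(3, 1024, requires_grad=True)  # [axes, samples], arbitrary units
opt = torch.optim.Adam([traj], lr=1e-2)
dt, smax = 4e-6, 150.0                           # illustrative dwell time / slew limit

for _ in range(200):
    grad_wave = torch.diff(traj, dim=1) / dt     # gradient waveform (up to scaling)
    slew = torch.diff(grad_wave, dim=1) / dt
    slew_penalty = torch.relu(slew.abs() - smax).pow(2).sum()
    image_loss = traj.pow(2).mean()              # placeholder for recon-quality loss
    loss = image_loss + 1e-9 * slew_penalty      # weight chosen arbitrarily here
    opt.zero_grad(); loss.backward(); opt.step()
```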


Subjects
Imaging, Three-Dimensional, Magnetic Resonance Imaging, Humans, Imaging, Three-Dimensional/methods, Prospective Studies, Magnetic Resonance Imaging/methods, Algorithms, Rotation
11.
IEEE Trans Med Imaging ; 42(10): 2961-2973, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37104110

ABSTRACT

Accurate scatter estimation is important in quantitative SPECT for improving image contrast and accuracy. With a large number of photon histories, Monte Carlo (MC) simulation can yield accurate scatter estimation but is computationally expensive. Recent deep-learning-based approaches can yield accurate scatter estimates quickly, yet full MC simulation is still required to generate scatter estimates as ground-truth labels for all training data. Here we propose a physics-guided weakly supervised training framework for fast and accurate scatter estimation in quantitative SPECT, using a 100× shorter MC simulation as weak labels and enhancing them with deep neural networks. Our weakly supervised approach also allows quick fine-tuning of the trained network on any new test data for further improved performance, with an additional short MC simulation (weak label) for patient-specific scatter modeling. Our method was trained with 18 XCAT phantoms with diverse anatomies/activities and then evaluated on 6 XCAT phantoms, 4 realistic virtual patient phantoms, 1 torso phantom, and 3 clinical scans from 2 patients for 177Lu SPECT with single/dual photopeaks (113, 208 keV). Our proposed weakly supervised method yielded comparable performance to the supervised counterpart in phantom experiments but with significantly reduced computation for labeling. Our proposed method with patient-specific fine-tuning achieved more accurate scatter estimates than the supervised method in clinical scans. Our method with physics-guided weak supervision thus enables accurate deep scatter estimation in quantitative SPECT while requiring much less computation for labeling, with patient-specific fine-tuning capability at test time.
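
Conceptually, the weak-supervision recipe is: fit the network to scatter estimates from short, noisy MC runs, then fine-tune on one additional short run for a new patient. A toy sketch follows (the network, data, and iteration counts are stand-ins):

```python
# Sketch: training on weak (short-MC) scatter labels, then patient-specific
# fine-tuning with one more short MC run. All tensors are random stand-ins.
import torch, torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

proj = torch.rand(16, 1, 64, 64)        # photopeak projections
weak = torch.rand(16, 1, 64, 64)        # scatter from ~100x shorter MC runs
for _ in range(10):
    loss = nn.functional.mse_loss(net(proj), weak)
    opt.zero_grad(); loss.backward(); opt.step()

new_proj, new_weak = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
for _ in range(5):                      # patient-specific fine-tuning
    loss = nn.functional.mse_loss(net(new_proj), new_weak)
    opt.zero_grad(); loss.backward(); opt.step()
```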


Subjects
Neural Networks, Computer, Tomography, Emission-Computed, Single-Photon, Humans, Tomography, Emission-Computed, Single-Photon/methods, Computer Simulation, Torso, Phantoms, Imaging, Monte Carlo Method, Scattering, Radiation, Image Processing, Computer-Assisted/methods
12.
IEEE Trans Radiat Plasma Med Sci ; 7(4): 410-420, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37021108

ABSTRACT

Training end-to-end unrolled iterative neural networks for SPECT image reconstruction requires a memory-efficient forward-backward projector for efficient backpropagation. This paper describes an open-source, high-performance Julia implementation of a SPECT forward-backward projector that supports memory-efficient backpropagation with an exact adjoint. Our Julia projector uses only ~5% of the memory of an existing Matlab-based projector. We compare unrolling a CNN-regularized expectation-maximization (EM) algorithm with end-to-end training using our Julia projector against other training methods such as gradient truncation (ignoring gradients involving the projector) and sequential training, using XCAT phantoms and virtual patient (VP) phantoms generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two different radionuclides (90Y and 177Lu) show that: 1) For 177Lu XCAT phantoms and 90Y VP phantoms, training the unrolled EM algorithm in end-to-end fashion with our Julia projector yields the best reconstruction quality compared to other training methods and OSEM, both qualitatively and quantitatively. 2) For VP phantoms with the 177Lu radionuclide, the reconstructed images using end-to-end training are of higher quality than those using sequential training and OSEM, but are comparable with those using gradient truncation. We also find that there exists a trade-off between computational cost and reconstruction accuracy for different training methods: end-to-end training has the highest accuracy because the correct gradient is used in backpropagation, whereas sequential training yields worse reconstruction accuracy but is significantly faster and uses much less memory.
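
For orientation, one unrolled iteration of a CNN-regularized EM algorithm can be sketched as below, with a dense matrix standing in for the SPECT projector (the paper's projector is Julia; this Python stand-in only illustrates the structure). Blending the EM update with a denoised image is one simple way to inject a learned prior and is not necessarily the paper's exact update:

```python
# Sketch: one CNN-regularized Poisson-EM iteration with a dense projector A.
import numpy as np

def em_cnn_step(x, A, y, denoiser, beta=0.1, eps=1e-8):
    ratio = y / (A @ x + eps)                       # Poisson EM ratio
    em = x * (A.T @ ratio) / (A.T @ np.ones_like(y) + eps)
    return (1 - beta) * em + beta * denoiser(x)     # blend in learned prior

A = np.abs(np.random.rand(128, 64))
y = np.random.poisson(A @ (1 + np.random.rand(64))).astype(float)
smooth = lambda v: np.convolve(v, np.ones(3) / 3, mode="same")  # denoiser stand-in
x = np.ones(64)
for _ in range(20):                                 # unrolled iterations
    x = em_cnn_step(x, A, y, smooth)
```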

13.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4915-4931, 2023 Apr.
Article in English | MEDLINE | ID: mdl-32750839

ABSTRACT

Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs with an iterative model-based image reconstruction (MBIR) algorithm, often leading to both good generalization capability and better reconstruction quality than existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and noniterative MBIR modules at each iteration by using majorizers, where each iteration of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the "spectral spread" of majorization matrices. Numerical experiments for light-field photography using a focal stack and sparse-view computed tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
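
The three-module structure per iteration can be sketched as follows; the refining network, step parameters, and the diagonal majorizer L·I below are illustrative simplifications of the paper's general majorizers:

```python
# Sketch of one Momentum-Net iteration: (1) NN refining, (2) momentum
# extrapolation, (3) noniterative majorized MBIR step for
# 0.5*||y - Ax||^2 + 0.5*rho*||x - z||^2.
import numpy as np

def momentum_net_iter(x, x_prev, y, A, refine, rho=0.5, t=0.9):
    z = refine(x)                                # (1) image refining module
    x_bar = x + t * (x - x_prev)                 # (2) extrapolation module
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of data fit
    grad = A.T @ (A @ x_bar - y)
    x_new = (L * x_bar - grad + rho * z) / (L + rho)  # (3) majorized MBIR step
    return x_new, x
```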

14.
IEEE Trans Comput Imaging ; 9: 846-856, 2023.
Article in English | MEDLINE | ID: mdl-38516350

ABSTRACT

Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu that are known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we used patient studies with 177Lu DOTATATE for training, and tested on phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole-body SPECT, and reducing SPECT acquisition times.
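
One simple way CT-side information can enter a regularizer (a sketch, not necessarily the paper's exact penalty) is a neighbor-difference roughness term whose cross-boundary pairs are dropped using the nnUNet-style segmentation labels, so smoothing never crosses organ or lesion edges:

```python
# Sketch: quadratic roughness penalty masked by a segmentation label map,
# so neighbor differences across region boundaries are excluded.
import numpy as np

def masked_roughness(x: np.ndarray, labels: np.ndarray) -> float:
    """x: image estimate; labels: integer segmentation labels, same shape."""
    total = 0.0
    for axis in range(x.ndim):
        d = np.diff(x, axis=axis)
        same_region = np.diff(labels, axis=axis) == 0
        total += np.sum((d * same_region) ** 2)
    return total

x = np.random.rand(64, 64)
labels = np.arange(64)[:, None] // 16 + np.zeros((64, 64), dtype=int)
penalty = masked_roughness(x, labels)
```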

15.
Appl Opt ; 61(14): 4030-4039, 2022 May 10.
Article in English | MEDLINE | ID: mdl-36256076

ABSTRACT

Image security is becoming an increasingly important issue due to advances in deep-learning-based image manipulations such as deep image inpainting and deepfakes. There has been considerable work to date on detecting such image manipulations using improved algorithms, with little attention paid to the possible role that hardware advances may play in improving security. We propose using a focal stack camera as what is, to the best of our knowledge, a novel secure imaging device that facilitates localizing modified regions in manipulated images. We show that applying convolutional neural network detection methods to focal stack images achieves significantly better detection accuracy than single-image-based forgery detection. This work demonstrates that focal stack images could be used as a novel secure image file format and opens up a new direction for secure imaging.

16.
IEEE Trans Med Imaging ; 41(9): 2318-2330, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35320096

ABSTRACT

Optimizing k-space sampling trajectories is a promising yet challenging topic for fast magnetic resonance imaging (MRI). This work proposes to jointly optimize a reconstruction method and sampling trajectories with respect to image reconstruction quality in a supervised learning manner. We parameterize trajectories with quadratic B-spline kernels to reduce the number of parameters and apply multi-scale optimization, which may help to avoid sub-optimal local minima. The algorithm includes an efficient non-Cartesian unrolled neural-network-based reconstruction and an accurate approximation for backpropagation through the non-uniform fast Fourier transform (NUFFT) operator to accurately reconstruct and back-propagate multi-coil non-Cartesian data. Penalties on slew rate and gradient amplitude enforce hardware constraints. Sampling and reconstruction are trained jointly using large public datasets. To correct for possible eddy-current effects introduced by the curved trajectory, we use a pencil-beam trajectory mapping technique. In both simulations and in-vivo experiments, the learned trajectory demonstrates significantly improved image quality compared to previous model-based and learning-based trajectory optimization methods for a 10× acceleration factor. Though trained with neural-network-based reconstruction, the proposed trajectory also leads to improved image quality with compressed-sensing-based reconstruction.
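
The B-spline parameterization reduces a long waveform to a short coefficient vector, so the optimizer works in a much smaller space; a sketch with illustrative sizes:

```python
# Sketch: generating an N-sample trajectory axis from a few quadratic
# B-spline control coefficients (the optimizable parameters).
import numpy as np
from scipy.interpolate import BSpline

n_coef, n_samples, degree = 32, 1024, 2
knots = np.linspace(0.0, 1.0, n_coef + degree + 1)
coef = np.random.randn(n_coef)                 # learnable control points
spline = BSpline(knots, coef, degree)
t = np.linspace(knots[degree], knots[-degree - 1], n_samples)
trajectory = spline(t)                         # one spatial axis of k(t)
```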


Subjects
Brain, Magnetic Resonance Imaging, Algorithms, Brain/diagnostic imaging, Fourier Analysis, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
17.
Phys Med Biol ; 67(11), 2022 May 19.
Article in English | MEDLINE | ID: mdl-35313288

ABSTRACT

Objective. The polychromatic nature of the x-ray spectrum in computed tomography leads to two types of artifacts in the reconstructed image: cupping in homogeneous areas and dark bands between dense parts, such as bones. This fact, together with the energy dependence of the mass attenuation coefficients of tissues, results in erroneous values in the reconstructed image. Many previously proposed post-processing correction schemes require either knowledge of the x-ray spectrum or the heuristic selection of parameters that have been shown to be suboptimal for correcting different slices in heterogeneous studies. In this study, we propose and validate a method to correct beam hardening artifacts that avoids such restrictions and restores the quantitative character of the image. Approach. Our approach extends the idea of the water-linearization method. It uses a simple calibration phantom to characterize the attenuation of the polychromatic x-ray beam for different soft tissue and bone combinations. The correction is based on the bone thickness traversed, obtained from a preliminary reconstruction. We evaluate the proposed method with simulations and real data using a phantom composed of PMMA and aluminum 6082 as materials equivalent to water and bone. Main results. Evaluation with simulated data showed a correction of the artifacts and a recovery of monochromatic values similar to that of the post-processing techniques used for comparison, while the method outperformed them on real data. Significance. The proposed method corrects beam hardening artifacts and restores monochromatic attenuation values with no need for spectrum knowledge or heuristic parameter tuning, based on the prior acquisition of a very simple calibration phantom.
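
The correction can be pictured as a table lookup built from the calibration phantom: for each (water-equivalent, bone-equivalent) thickness pair, store the gap between the ideal monochromatic line integral and the measured polychromatic one, then add that gap to each measured ray. The numbers below are fabricated just to make the sketch runnable; only the structure mirrors the abstract:

```python
# Sketch: calibration-table beam-hardening correction.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

tw = np.linspace(0, 20, 21)                    # PMMA (water-equiv.) thickness, cm
tb = np.linspace(0, 5, 11)                     # aluminum (bone-equiv.) thickness, cm
mono = 0.2 * tw[:, None] + 0.5 * tb[None, :]   # ideal monochromatic line integrals
poly = np.log1p(mono)                          # made-up "hardened" measurements
delta = RegularGridInterpolator((tw, tb), mono - poly)

# Per-ray thicknesses come from forward-projecting a preliminary reconstruction.
tw_est, tb_est = 10.0, 2.0
p_measured = np.log1p(0.2 * tw_est + 0.5 * tb_est)
p_corrected = p_measured + delta([[tw_est, tb_est]])[0]
```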


Subjects
Algorithms, Image Processing, Computer-Assisted, Artifacts, Image Processing, Computer-Assisted/methods, Phantoms, Imaging, Water
18.
Med Phys ; 49(2): 1216-1230, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34882821

ABSTRACT

PURPOSE: Current methods for patient-specific voxel-level dosimetry in radionuclide therapy suffer from a trade-off between accuracy and computational efficiency. Monte Carlo (MC) radiation transport algorithms are considered the gold standard for voxel-level dosimetry but can be computationally expensive, whereas faster dose voxel kernel (DVK) convolution can be suboptimal in the presence of tissue heterogeneities. Furthermore, the accuracies of both methods are limited by the spatial resolution of the reconstructed emission image. To overcome these limitations, this paper considers a single deep convolutional neural network (CNN) with residual learning (named DblurDoseNet) that learns to produce dose-rate maps while compensating for the limited resolution of SPECT images. METHODS: We trained our CNN using MC-generated dose-rate maps that directly corresponded to the true activity maps in virtual patient phantoms. Residual learning was applied such that our CNN learned only the difference between the true dose-rate map and the DVK dose-rate map with density scaling. Our CNN consists of a 3D depth feature extractor followed by a 2D U-Net, where the input was 11 slices (3.3 cm) of a given Lu-177 SPECT/CT image and density map, and the output was the dose-rate map corresponding to the center slice. The CNN was trained with nine virtual patient phantoms and tested on five different phantoms plus 42 SPECT/CT scans of patients who underwent Lu-177 DOTATATE therapy. RESULTS: When testing on virtual patient phantoms, the lesion/organ mean dose-rate error and the normalized root mean square error (NRMSE) relative to the ground truth of the CNN method were consistently lower than those of DVK and MC when applied to SPECT images. Compared to DVK/MC, the average improvement for the CNN in mean dose-rate error was 55%/53% and 66%/56%, and in NRMSE was 18%/17% and 10%/11%, for lesion and kidney regions, respectively. Line profiles and dose-volume histograms demonstrated compensation for SPECT resolution effects in the CNN-generated dose-rate maps. The ensemble noise standard deviation, determined from multiple Poisson realizations, was improved by 21%/27% compared to DVK/MC. In patients, potential improvements from CNN dose-rate maps compared to DVK/MC were illustrated qualitatively, due to the absence of ground truth. The trained residual CNN took about 30 s on a single GPU (Tesla V100) to generate a 512 × 512 × 130 dose-rate map for a patient. CONCLUSION: The proposed residual CNN, trained using phantoms generated from patient images, has potential for real-time patient-specific dosimetry in clinical treatment planning due to its demonstrated improvement in accuracy, resolution, noise, and speed over the DVK/MC approaches.
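
The residual-learning structure described above (predict only the difference from the density-scaled DVK map) can be sketched as a wrapper; the tiny conv net below is a stand-in for the paper's 3D feature extractor plus 2D U-Net:

```python
# Sketch: residual dose-rate network; output = DVK estimate + learned residual.
import torch
import torch.nn as nn

class ResidualDoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, spect, density, dvk_dose):
        residual = self.body(torch.cat([spect, density], dim=1))
        return dvk_dose + residual        # residual learning

net = ResidualDoseNet()
spect, density, dvk = (torch.rand(1, 1, 64, 64) for _ in range(3))
dose_rate = net(spect, density, dvk)
```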


Subjects
Radiometry, Tomography, Emission-Computed, Single-Photon, Humans, Monte Carlo Method, Positron-Emission Tomography, Radioisotopes, Radionuclide Imaging
19.
Med Phys ; 49(2): 836-853, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34954845

ABSTRACT

PURPOSE: Deep learning (DL) is rapidly finding applications in low-dose CT image denoising. While DL methods have the potential to improve image quality (IQ) over the filtered back projection (FBP) method and to produce images quickly, the performance generalizability of these data-driven methods is not yet fully understood. The main purpose of this work is to investigate the performance generalizability of a low-dose CT image denoising neural network on data acquired under different scan conditions, particularly with respect to three parameters: reconstruction kernel, slice thickness, and dose (noise) level. A secondary goal is to identify any underlying data property associated with the CT scan settings that might help predict the generalizability of the denoising network. METHODS: We selected the residual encoder-decoder convolutional neural network (REDCNN) as an example of a low-dose CT image denoising technique. To study how the network generalizes on the three imaging parameters, we grouped the CT volumes in the Low-Dose Grand Challenge (LDGC) data into three pairs of training datasets according to their imaging parameters, changing only one parameter in each pair. We trained REDCNN with them to obtain six denoising models. We tested each denoising model on datasets with matching and mismatching parameters with respect to its training sets regarding dose, reconstruction kernel, and slice thickness, respectively, to evaluate changes in denoising performance. Denoising performance was evaluated on patient scans, simulated phantom scans, and physical phantom scans using IQ metrics including mean-squared error (MSE), contrast-dependent modulation transfer function (MTF), pixel-level noise power spectrum (pNPS), and low-contrast lesion detectability (LCD). RESULTS: REDCNN had larger MSE when the testing data differed from the training data in reconstruction kernel, but no significant MSE difference when varying slice thickness in the testing data. REDCNN trained with quarter-dose data had slightly worse MSE in denoising higher-dose images than that trained with mixed-dose data (17%-80%). The MTF tests showed that REDCNN trained with the two reconstruction kernels and slice thicknesses yielded images of similar image resolution. However, REDCNN trained with mixed-dose data preserved low-contrast resolution better than REDCNN trained with quarter-dose data. In the pNPS test, REDCNN trained with smooth-kernel data could not remove high-frequency noise in sharp-kernel test data, possibly because the lack of high-frequency noise in the smooth-kernel training data limited the trained model's ability to remove it. Finally, in the LCD test, REDCNN improved lesion detectability over the original FBP images regardless of whether the training and testing data had matching reconstruction kernels. CONCLUSIONS: REDCNN is observed to generalize poorly between reconstruction kernels, to be more robust in denoising data of arbitrary dose levels when trained with mixed-dose data, and to be not highly sensitive to slice thickness. It is known that the reconstruction kernel affects the in-plane pNPS shape of a CT image, whereas slice thickness and dose level do not, so it is possible that the generalizability of this CT image denoising network correlates strongly with the pNPS similarity between the testing and training data.
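
For reference, a pixel-level noise power spectrum of the kind used in this study is typically estimated by averaging squared DFT magnitudes of mean-subtracted noise-only patches; a sketch follows (normalization conventions vary, and the pixel size is illustrative):

```python
# Sketch: pNPS estimate from repeated noise realizations of an ROI.
import numpy as np

def pnps(noise_rois: np.ndarray, pixel_mm: float = 0.7) -> np.ndarray:
    """noise_rois: (n_realizations, ny, nx) noise-only patches."""
    n, ny, nx = noise_rois.shape
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    return spectra.mean(axis=0) * pixel_mm ** 2 / (ny * nx)

nps = pnps(np.random.randn(32, 64, 64))
```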


Subjects
Deep Learning, Algorithms, Humans, Image Processing, Computer-Assisted, Neural Networks, Computer, Phantoms, Imaging, Radiation Doses, Signal-to-Noise Ratio, Tomography, X-Ray Computed
20.
IEEE Trans Comput Imaging ; 8: 838-850, 2022.
Article in English | MEDLINE | ID: mdl-37065711

ABSTRACT

This paper discusses phase retrieval algorithms for maximum likelihood (ML) estimation from measurements following independent Poisson distributions in very low-count regimes, e.g., 0.25 photons per pixel. To maximize the log-likelihood of the Poisson ML model, we propose a modified Wirtinger flow (WF) algorithm using a step size based on the observed Fisher information. This approach eliminates all parameter tuning except the number of iterations. We also propose a novel curvature for majorize-minimize (MM) algorithms with a quadratic majorizer. We show theoretically that our proposed curvature is sharper than the curvature derived from the supremum of the second derivative of the Poisson ML cost function. We compare the proposed algorithms (WF, MM) with existing optimization methods, including WF using other step-size schemes, quasi-Newton methods such as LBFGS, and alternating direction method of multipliers (ADMM) algorithms, under a variety of experimental settings. Simulation experiments with a random Gaussian matrix, a canonical DFT matrix, a masked DFT matrix, and an empirical transmission matrix demonstrate the following. 1) As expected, algorithms based on the Poisson ML model consistently produce higher-quality reconstructions than algorithms derived from Gaussian-noise ML models when applied to low-count data. Furthermore, incorporating regularizers, such as corner-rounded anisotropic total variation (TV), that exploit the assumed properties of the latent image can further improve reconstruction quality. 2) In unregularized cases, our proposed WF algorithm with a Fisher-information step size converges faster (in terms of cost function and PSNR vs. time) than other WF methods, e.g., WF with empirical step size, backtracking line search, and the optimal step size for the Gaussian noise model; it also converges faster than the LBFGS quasi-Newton method. 3) In regularized cases, our proposed WF algorithm converges faster than WF with backtracking line search, LBFGS, MM, and ADMM.
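
A sketch of the WF update for the Poisson model y_i ~ Poisson(|a_i^H x|^2 + b_i), with the step size set from the Fisher information along the gradient direction; this is one plausible reading of the abstract's step-size rule, not the paper's exact formula:

```python
# Sketch: Poisson-ML Wirtinger flow with a Fisher-information-based step size.
import numpy as np

def wf_fisher_step(x, A, y, b, eps=1e-12):
    u = A @ x
    v = np.abs(u) ** 2 + b                       # Poisson means
    grad = A.conj().T @ ((1.0 - y / v) * u)      # Wirtinger gradient (up to a factor)
    dv = 2.0 * np.real(np.conj(u) * (A @ grad))  # derivative of v along grad
    curv = np.sum(dv ** 2 / v) + eps             # Fisher curvature along grad
    mu = np.linalg.norm(grad) ** 2 / curv        # step size: no tuning parameters
    return x - mu * grad

rng = np.random.default_rng(0)
A = (rng.standard_normal((50, 16)) + 1j * rng.standard_normal((50, 16))) / np.sqrt(50)
x_true = rng.standard_normal(16) + 1j * rng.standard_normal(16)
b = np.full(50, 0.1)                             # known background counts
y = rng.poisson(np.abs(A @ x_true) ** 2 + b).astype(float)
x = np.ones(16, dtype=complex)
for _ in range(100):
    x = wf_fisher_step(x, A, y, b)
```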
