Results 1 - 20 of 33
1.
Biomed Phys Eng Express ; 9(6)2023 09 12.
Article in English | MEDLINE | ID: mdl-37604139

ABSTRACT

Electrocardiogram (ECG)-gated multi-phase computed tomography angiography (MP-CTA) is frequently used for diagnosis of coronary artery disease. Radiation dose is a potential concern because the scan needs to cover a wide range of cardiac phases during a heart cycle. A common method to reduce radiation is to limit the full-dose acquisition to a predefined range of phases while reducing the radiation dose for the rest. Our goal in this study is to develop a spatiotemporal deep learning method to enhance the quality of low-dose CTA images at the phases acquired at reduced radiation dose. Recently, we demonstrated that a deep learning method, the cycle-consistent generative adversarial network (CycleGAN), could effectively denoise low-dose CT images through spatial image translation without labeled image pairs in the low-dose and full-dose image domains. As CycleGAN does not utilize temporal information in its denoising mechanism, we propose to use RecycleGAN, which can translate a time-ordered series of images from the low-dose domain to the full-dose domain through an additional recurrent network. To evaluate RecycleGAN, we use the XCAT phantom program, a highly realistic simulation tool based on real patient data, to generate MP-CTA image sequences for 18 patients (14 for training, 2 for validation and 2 for testing). Our simulation results show that RecycleGAN achieves better denoising performance than CycleGAN based on both visual inspection and quantitative metrics. We further demonstrate the superior denoising performance of RecycleGAN using clinical MP-CTA images from 50 patients.
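
As a rough illustration of the unpaired, cycle-consistent objective that CycleGAN and RecycleGAN build on (not the authors' implementation; the adversarial discriminators and RecycleGAN's recurrent temporal term are omitted), the sketch below computes a cycle-consistency loss for two toy generators on random stand-in images.

```python
# Minimal sketch of the cycle-consistency term in CycleGAN-style unpaired
# denoising; networks and data are toy placeholders, not the paper's models.
import torch
import torch.nn as nn

def tiny_generator():
    # Stand-in image-to-image network; real generators are much deeper.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_ld2fd = tiny_generator()              # low-dose -> full-dose mapping
G_fd2ld = tiny_generator()              # full-dose -> low-dose mapping
l1 = nn.L1Loss()

low_dose = torch.rand(4, 1, 64, 64)     # unpaired batches standing in for CT frames
full_dose = torch.rand(4, 1, 64, 64)

# Mapping to the other domain and back should reproduce the input.
cycle_loss = (l1(G_fd2ld(G_ld2fd(low_dose)), low_dose) +
              l1(G_ld2fd(G_fd2ld(full_dose)), full_dose))
cycle_loss.backward()                   # gradients for both generators
print(float(cycle_loss))
```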


Subject(s)
Computed Tomography Angiography , Tomography, X-Ray Computed , Humans , Heart/diagnostic imaging , Angiography , Benchmarking
2.
Appl Opt ; 62(8): 2124-2129, 2023 Mar 10.
Article in English | MEDLINE | ID: mdl-37133101

ABSTRACT

We present a snapshot imaging Mueller matrix polarimeter using modified Savart polariscopes (MSP-SIMMP). The MSP-SIMMP contains both the polarizing optics and the analyzing optics, encoding all Mueller matrix components of the sample into the interferogram through spatial modulation. An interference model and the reconstruction and calibration methods are discussed. To demonstrate the feasibility of the proposed MSP-SIMMP, a numerical simulation and a laboratory experiment of a design example are presented. A remarkable advantage of the MSP-SIMMP is that it is easy to calibrate. Moreover, compared with conventional imaging Mueller matrix polarimeters with rotating parts, the proposed instrument is simple, compact, snapshot-enabled, and stationary (no moving parts).
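
For readers unfamiliar with spatial-modulation polarimetry, the toy script below demodulates a single carrier channel of a simulated interferogram by Fourier filtering. The carrier frequency, window width and Gaussian envelope are invented for illustration; the real MSP-SIMMP encodes many such channels, one per Mueller matrix component.

```python
# Toy Fourier demodulation of one carrier channel from a spatially modulated
# interferogram (illustrative only; all numbers are made up).
import numpy as np

ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
fx = 0.125                                  # hypothetical carrier frequency (cycles/pixel)
envelope = np.exp(-((x - nx / 2)**2 + (y - ny / 2)**2) / (2 * 40.0**2))
interferogram = 1.0 + envelope * np.cos(2 * np.pi * fx * x)

spectrum = np.fft.fftshift(np.fft.fft2(interferogram))
freqs = np.fft.fftshift(np.fft.fftfreq(nx))
cx = int(np.argmin(np.abs(freqs - fx)))     # column of the +fx carrier peak

window = np.zeros_like(spectrum)
half = 10                                   # hypothetical channel half-width (bins)
window[:, cx - half:cx + half + 1] = 1.0

channel = np.fft.ifft2(np.fft.ifftshift(spectrum * window))
recovered = 2.0 * np.abs(channel)           # demodulated envelope of this channel
print(np.max(np.abs(recovered - envelope))) # small residual: channel recovered
```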

3.
Appl Opt ; 62(12): 3142-3148, 2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37133162

ABSTRACT

Snapshot imaging polarimeters (SIPs) using spatial modulation have gained increasing popularity due to their ability to obtain all four Stokes parameters in a single measurement. However, existing reference beam calibration techniques cannot extract the modulation phase factors of the spatially modulated system. In this paper, a calibration technique based on phase-shift interference (PSI) theory is proposed to address this issue. The proposed technique accurately extracts and demodulates the modulation phase factors by measuring the reference object at different polarization analyzer orientations and applying a PSI algorithm. Using the snapshot imaging polarimeter with modified Savart polariscopes as an example, the basic principle of the proposed technique is analyzed in detail. The feasibility of this calibration technique is then demonstrated by a numerical simulation and a laboratory experiment. This work provides a different perspective on the calibration of spatially modulated snapshot imaging polarimeters.
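
As a generic illustration of the phase-shifting-interferometry idea this calibration builds on (the paper's procedure uses different analyzer orientations and is not reproduced here), the snippet below recovers a hypothetical phase factor from four simulated frames with the classic four-step PSI estimator.

```python
# Generic four-step phase-shifting interferometry (PSI) phase retrieval.
# Illustrative only; the phase, background and modulation values are made up.
import numpy as np

x = np.linspace(0, 1, 512)
true_phase = 2 * np.pi * 3 * x             # hypothetical phase factor to recover
A, B = 1.0, 0.5                            # background and fringe modulation

frames = [A + B * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
I0, I1, I2, I3 = frames

wrapped = np.arctan2(I3 - I1, I0 - I2)     # classic 4-step PSI estimator
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped, true_phase))  # True: phase factor recovered
```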

4.
Biomed Phys Eng Express ; 8(6)2022 11 04.
Article in English | MEDLINE | ID: mdl-36301699

ABSTRACT

Computed tomography (CT) is widely used to diagnose many diseases. Low-dose CT has been actively pursued to lower the ionizing radiation risk. A relatively smoother kernel is typically used in low-dose CT to suppress image noise, which may sacrifice spatial resolution. In this work, we propose a texture transformer network to simultaneously reduce image noise and improve spatial resolution in CT images. This network, referred to as Texture Transformer for Super Resolution (TTSR), is a reference-based deep-learning image super-resolution method built upon a generative adversarial network (GAN). The noisy low-resolution CT (LRCT) image and the routine-dose high-resolution CT (HRCT) image serve as the query and key in a transformer, respectively. Image translation is optimized through deep neural network (DNN) texture extraction, correlation embedding, and attention-based texture transfer and synthesis to achieve joint feature learning between LRCT and HRCT images for super-resolution CT (SRCT) images. To evaluate SRCT performance, we use both simulated data from the XCAT phantom program and real patient data. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and feature similarity (FSIM) index are used as quantitative metrics. For comparison of SRCT performance, cubic spline interpolation, SRGAN (a GAN super-resolution method with an additional content loss), and GAN-CIRCLE (a GAN super-resolution method with cycle consistency) were used. Compared to these methods, TTSR restores more details in SRCT images and achieves better PSNR, SSIM, and FSIM for both simulation and real-patient data. In addition, we show that TTSR yields better image quality and demands much less computation time than high-resolution low-dose CT images denoised by block-matching and 3D filtering (BM3D) and by GAN-CIRCLE. In summary, the proposed TTSR method based on a texture transformer and an attention mechanism provides an effective and efficient tool to improve spatial resolution and suppress noise in low-dose CT images.
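
A bare-bones sketch of the attention-based texture transfer idea (query from the low-resolution image, key/value from the high-resolution reference) is given below. TTSR itself uses learned texture features and hard plus soft attention; here everything is reduced to cosine-similarity attention over random stand-in patch features.

```python
# Minimal sketch of reference-based texture transfer via attention; feature
# tensors are random placeholders, not the paper's learned texture features.
import torch
import torch.nn.functional as F

n_q, n_k, d = 64, 81, 32                 # query patches, reference patches, feature dim
q = torch.rand(n_q, d)                   # features of noisy low-resolution CT patches
k = torch.rand(n_k, d)                   # features of the high-resolution reference
v = torch.rand(n_k, d)                   # textures to transfer from the reference

# Correlation embedding: cosine similarity between every query and key patch.
sim = F.normalize(q, dim=1) @ F.normalize(k, dim=1).T      # (n_q, n_k)

# Soft attention: weight reference textures by relevance and synthesize output.
weights = F.softmax(sim / d**0.5, dim=1)
transferred = weights @ v                # (n_q, d) texture attached to each query patch
print(transferred.shape)
```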


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Signal-To-Noise Ratio , Neural Networks, Computer , Phantoms, Imaging
5.
J Neurosci Methods ; 365: 109389, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34687797

ABSTRACT

BACKGROUND: There is growing interest in the neuroscience community in estimating and mapping microscopic properties of brain tissue non-invasively using magnetic resonance measurements. Machine learning methods are actively investigated to predict the signals measured in diffusion magnetic resonance imaging (dMRI). NEW METHOD: We applied neural architecture search (NAS), in which a recurrent neural network is trained to generate a multilayer perceptron, to predict unknown dMRI signals from the acquisition parameters and training data. The search space of NAS is the number of neurons in each layer of the multilayer perceptron network. To the best of our knowledge, this is the first application of NAS to the dMRI signal prediction problem. RESULTS: The experimental results demonstrate that the proposed NAS method achieves fast training and predicts dMRI signals accurately. For dMRI signals with four acquisition strategies, double diffusion encoding (DDE), double oscillating diffusion encoding (DODE), multi-shell and DSI-like pulsed gradient spin-echo (PGSE), the mean squared errors of the multilayer perceptron network designed by NAS are 0.0043, 0.0034, 0.0147 and 0.0199, respectively. COMPARISON WITH EXISTING METHOD(S): We also compared NAS with other machine learning prediction methods: support vector regression (SVR), decision tree (DT), random forest (RF), k-nearest neighbors (KNN), AdaBoost regressor (AR), gradient boosting regressor (GBR) and extra-trees regressor (ET). NAS achieved better prediction performance in most cases. CONCLUSION: In this study, NAS was developed for the prediction of dMRI signals and could become an effective prediction tool.
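
To make the "search over MLP layer sizes" idea concrete, the sketch below runs a simple random search over hidden-layer widths with scikit-learn on synthetic data. The paper's NAS uses a recurrent-network controller rather than random sampling, and the data, layer-size ranges and budget here are all invented.

```python
# Architecture search over MLP hidden-layer sizes, illustrated with a plain
# random search on synthetic stand-in data (not the paper's RNN-based NAS).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 6))                       # stand-in acquisition parameters
y = np.exp(-3 * X[:, 0]) + 0.1 * X[:, 1]              # stand-in dMRI signal
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

best = None
for _ in range(5):                                    # sample candidate architectures
    layers = tuple(int(n) for n in rng.integers(8, 128, size=int(rng.integers(1, 4))))
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    mse = mean_squared_error(y_va, model.predict(X_va))
    if best is None or mse < best[0]:
        best = (mse, layers)

print("best architecture:", best[1], "validation MSE:", best[0])
```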


Subject(s)
Diffusion Magnetic Resonance Imaging , Neural Networks, Computer , Brain/diagnostic imaging , Diffusion Magnetic Resonance Imaging/methods , Machine Learning
6.
Neuroimage ; 240: 118367, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34237442

ABSTRACT

Diffusion MRI (dMRI) has become an invaluable tool to assess the microstructural organization of brain tissue. Depending on the specific acquisition settings, the dMRI signal encodes specific properties of the underlying diffusion process. In the last two decades, several signal representations have been proposed to fit the dMRI signal and decode such properties. Most methods, however, are tested and developed on a limited amount of data, and their applicability to other acquisition schemes remains unknown. With this work, we aimed to shed light on the generalizability of existing dMRI signal representations to different diffusion encoding parameters and brain tissue types. To this end, we organized a community challenge named MEMENTO, making the same datasets available for fair comparisons across algorithms and techniques. We considered two state-of-the-art diffusion datasets: single-diffusion-encoding (SDE) spin-echo data from a human brain with over 3820 unique diffusion weightings (the MASSIVE dataset), and double (oscillating) diffusion encoding (DDE/DODE) data of a mouse brain including over 2520 unique data points. A subset of the data sampled in 5 different voxels was openly distributed, and the challenge participants were asked to predict the remaining part of the data. After one year, eight participating teams submitted a total of 80 signal fits. For each submission, we evaluated the mean squared error, the variance of the prediction error and the Bayesian information criterion. The received submissions predicted either multi-shell SDE data (37%) or DODE data (22%), followed by Cartesian SDE data (19%) and DDE data (18%). Most submissions predicted the signals measured with SDE remarkably well, with the exception of low and very strong diffusion weightings. The prediction of DDE and DODE data seemed more challenging, likely because none of the submissions explicitly accounted for diffusion time and frequency. Besides the choice of model, decisions on the fitting procedure and hyperparameters play a major role in prediction performance, highlighting the importance of optimizing and reporting such choices. This work is a community effort to highlight the strengths and limitations of the field at representing dMRI acquired with trending encoding schemes, gaining insight into how different models generalize to different tissue types and fiber configurations over a large range of diffusion encodings.
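
The three evaluation quantities quoted above can be computed as in the sketch below; the Gaussian-residual form of the Bayesian information criterion is an assumption on our part, not the challenge's published scoring code, and the signals are synthetic stand-ins.

```python
# Mean squared error, variance of the prediction error, and a BIC under a
# Gaussian-residual assumption (an assumed form, not the challenge's code).
import numpy as np

def evaluate(measured, predicted, n_params):
    err = predicted - measured
    n = measured.size
    mse = np.mean(err**2)
    err_var = np.var(err)
    bic = n * np.log(mse) + n_params * np.log(n)
    return mse, err_var, bic

rng = np.random.default_rng(1)
signal = np.exp(-rng.uniform(0, 3, size=500))          # stand-in dMRI signal
prediction = signal + rng.normal(0, 0.02, size=500)    # stand-in model fit
print(evaluate(signal, prediction, n_params=4))
```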


Subject(s)
Brain/diagnostic imaging , Databases, Factual , Diffusion Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Animals , Brain/physiology , Humans , Mice
7.
Biomed Phys Eng Express ; 7(5)2021 07 29.
Article in English | MEDLINE | ID: mdl-34237713

ABSTRACT

To achieve better performance for 4D multi-frame reconstruction with a parametric motion model (MF-PMM), a general simultaneous motion estimation and image reconstruction (G-SMEIR) method is proposed. In G-SMEIR, projection-domain motion estimation and image-domain motion estimation are performed alternately to achieve better 4D reconstruction. This method mitigates the local-optimum trapping problem in either domain. To improve computational efficiency, the image-domain motion estimation is accelerated by adopting fast convergent algorithms and graphics processing unit (GPU) computing. The proposed G-SMEIR method is tested using a cone-beam computed tomography (CBCT) simulation study of the 4D XCAT phantom at different dose levels and compared with 3D total-variation-based reconstruction (3D TV), 4D reconstruction with image-domain motion estimation (IM4D), and SMEIR. G-SMEIR shows strong denoising capability and achieves similar performance at regular dose and half dose. The root mean squared error (RMSE) of G-SMEIR is the best among the four methods and improved by about 12% over SMEIR for all respiratory phase images at full dose. G-SMEIR also achieved the best structural similarity index (SSIM) values among all methods. More importantly, G-SMEIR leads to more than 40% improvement over SMEIR in the mean deviation from the phantom tumor motion. A preliminary patient CBCT reconstruction also shows that G-SMEIR yields better image quality than frame-by-frame reconstruction (3D TV) and than MF-PMM using image-domain motion estimation (IM4D) or projection-domain motion estimation (SMEIR) alone. G-SMEIR, with a flexible combination of image-domain and projection-domain motion estimation, provides an effective tool for 4D tomographic reconstruction.


Subject(s)
Image Processing, Computer-Assisted , Motion , Humans , Four-Dimensional Computed Tomography , Lung Neoplasms
8.
IEEE Trans Radiat Plasma Med Sci ; 5(2): 224-234, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33748562

ABSTRACT

Low-dose computed tomography (LDCT) is desirable given the prevalence of CT and its ionizing radiation, but it suffers from elevated noise. To improve LDCT image quality, an image-domain denoising method based on a cycle-consistent generative adversarial network ("CycleGAN") is developed and compared with two other variants, IdentityGAN and GAN-CIRCLE. Unlike supervised deep learning methods, these unpaired methods can effectively learn image translation from the low-dose domain to the full-dose (FD) domain without the need to align FDCT and LDCT images. The results on real and synthetic patient CT data show that these methods can achieve peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) comparable to, if not better than, other state-of-the-art denoising methods. Among CycleGAN, IdentityGAN, and GAN-CIRCLE, the latter achieves the best denoising performance with the shortest computation time. Subsequently, GAN-CIRCLE is used to demonstrate that increasing the number of training patches and of training patients improves denoising performance. Finally, two non-overlapping experiments, i.e. with no counterparts of FDCT and LDCT images in the training data, further demonstrate the effectiveness of unpaired learning methods. This work paves the way for applying unpaired deep learning methods to enhance LDCT images without requiring aligned full-dose and low-dose images from the same patient.
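
The two image-quality metrics reported throughout this and the neighbouring abstracts can be computed with scikit-image as sketched below; the synthetic "full-dose" and "denoised" images are stand-ins for CT data.

```python
# Computing PSNR and SSIM with scikit-image on synthetic stand-in images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
full_dose = rng.uniform(size=(256, 256)).astype(np.float64)
denoised = np.clip(full_dose + rng.normal(0, 0.05, full_dose.shape), 0, 1)

psnr = peak_signal_noise_ratio(full_dose, denoised, data_range=1.0)
ssim = structural_similarity(full_dose, denoised, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```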

9.
Phys Med Biol ; 66(6): 065016, 2021 03 09.
Article in English | MEDLINE | ID: mdl-33571980

ABSTRACT

With the goal of developing a total-body small-animal PET system with a high spatial resolution of ∼0.5 mm and a high sensitivity >10% for mouse/rat studies, we simulated four scanners using the graphics processing unit (GPU)-based Monte Carlo simulation package gPET and compared their performance in terms of spatial resolution and sensitivity. We also investigated the effect of depth-of-interaction (DOI) resolution on the spatial resolution. All the scanners are built upon 128 DOI-encoding dual-ended readout detectors with lutetium yttrium oxyorthosilicate (LYSO) arrays arranged in 8 detector rings. The solid angle coverages of the four scanners are all ∼0.85 steradians. Each LYSO element has a cross-section of 0.44 × 0.44 mm2 and the pitch of the LYSO arrays is 0.5 mm. The four scanners can be divided into two groups: (1) H2RS110-C10 and H2RS110-C20 with 40 × 40 LYSO arrays, a ring diameter of 110 mm and an axial length of 167 mm, and (2) H2RS160-C10 and H2RS160-C20 with 60 × 60 LYSO arrays, a diameter of 160 mm and an axial length of 254 mm. C10 and C20 denote crystal thicknesses of 10 and 20 mm, respectively. The simulation results show that all scanners have a spatial resolution better than 0.5 mm at the center of the field-of-view (FOV). The radial resolution strongly depends on the DOI resolution and radial offset, whereas the axial and tangential resolutions do not. Comparing the C10 and C20 designs, the former provides better resolution, especially at positions away from the center of the FOV, whereas the latter has 2× higher sensitivity (∼10% versus ∼20%). This simulation study provides evidence that the 110 mm systems are a good choice for total-body mouse studies at a lower cost, whereas the 160 mm systems are suited for both total-body mouse and rat studies.
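
A quick geometric check of the quoted coverage is given below, assuming (an interpretation on our part, not stated in the abstract) that the ∼0.85 figure refers to the fraction of 4π subtended by a cylindrical detector of diameter D and axial length L from the centre of the FOV.

```python
# Fraction of 4*pi subtended from the FOV centre by a cylindrical detector:
# L / sqrt(L**2 + D**2). Interpreting the quoted ~0.85 as this fraction is an
# assumption; the quantities below come from the scanner dimensions above.
import math

def coverage_fraction(diameter_mm, axial_length_mm):
    return axial_length_mm / math.hypot(axial_length_mm, diameter_mm)

print(coverage_fraction(110, 167))   # H2RS110 designs -> ~0.84
print(coverage_fraction(160, 254))   # H2RS160 designs -> ~0.85
```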


Subject(s)
Equipment Design , Lutetium/chemistry , Positron-Emission Tomography/instrumentation , Positron-Emission Tomography/methods , Silicates/chemistry , Animals , Computer Simulation , Mice , Monte Carlo Method , Rats , Sensitivity and Specificity
10.
J Photochem Photobiol B ; 214: 112084, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33248881

ABSTRACT

Herein, we report cost-effective and biocompatible CuS nanoparticles (NPs), derived from a single-source precursor, as a photothermal agent for treating deep cancer and for the photocatalytic remediation of organic carcinogens. These NPs efficiently kill MCF7 cells (both in vivo and in vitro) under NIR irradiation by raising the temperature of tumor cells. Such materials can be used for the treatment of deep cancer because they produce a heating effect using long-wavelength, deeply penetrating NIR radiation. Furthermore, under solar light irradiation CuS NPs efficiently convert p-nitrophenol (PNP), an environmental carcinogen, to p-aminophenol (PAP), a compound of pharmaceutical relevance. In short, CuS can be used both for the treatment of deep cancer and for the remediation of carcinogenic pollutants. There appears to be an intrinsic connection between these two functions of CuS NPs that needs to be explored at length.


Subject(s)
Antineoplastic Agents/chemistry , Carcinogens/chemistry , Copper/chemistry , Metal Nanoparticles/chemistry , Animals , Antineoplastic Agents/pharmacology , Apoptosis/drug effects , Carcinogens/pharmacology , Catalysis , Humans , Infrared Rays , MCF-7 Cells , Mice, SCID , Neoplasms, Experimental , Nitrophenols/chemistry , Photolysis , Phototherapy , Povidone/chemistry
11.
Article in English | MEDLINE | ID: mdl-34040798

ABSTRACT

Range uncertainty remains a major concern in particle therapy, as it may cause target dose degradation and normal tissue overdosing. Positron emission tomography (PET) and prompt gamma imaging (PGI) are two promising modalities for range verification. However, the relatively long acquisition time of PET and the relatively low yield of PGI pose challenges for real-time range verification. In this paper, we explore using primary Carbon-11 (C-11) ion beams, rather than primary C-12 ion beams, to enhance the gamma yield and thereby improve PET and PGI, using Monte Carlo simulations of water and PMMA phantoms at four incident energies (95, 200, 300, and 430 MeV u-1). Prompt gammas (PGs) and annihilation gammas (AGs) were recorded for post-processing to mimic PGI and PET imaging, respectively. We used both time-of-flight (TOF) and energy selections for PGI, which boosted the ratio of PGs to background neutrons to 2.44, up from 0.87 without the selections. At the lowest incident energy (100 MeV u-1), the PG yield from C-11 was 0.82 times that from C-12, while the AG yield from C-11 was 6-11 times higher than that from C-12 in PMMA. At higher energies, the PG differences between C-11 and C-12 were much smaller, while the AG yield from C-11 was 30%-90% higher than that from C-12 for a minute-long acquisition. With a minute-long acquisition, the AG depth distribution of C-11 showed a sharp peak coincident with the Bragg peak due to the decay of the primary C-11 ions, whereas that of C-12 showed no such peak. The high AG yield and distinct peak could lead to more precise range verification with C-11 than with C-12. These results demonstrate that using C-11 ion beams for potentially combined PGI and PET has great potential to improve the accuracy and precision of online single-spot range verification.
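
A toy version of the TOF-plus-energy event selection described above is sketched below; the arrival-time and energy distributions, the window settings and the gamma/neutron mixture are all invented, and serve only to show the kind of filtering that boosts the gamma-to-neutron ratio.

```python
# Toy TOF + energy selection on a mixed gamma/neutron event list
# (all numbers are made up; not the paper's simulation output).
import numpy as np

rng = np.random.default_rng(0)
n = 20000
is_gamma = rng.uniform(size=n) < 0.3
# Hypothetical arrival times (ns) and deposited energies (MeV): prompt gammas
# arrive early with a few MeV; neutrons arrive later with lower energy.
time_ns = np.where(is_gamma, rng.normal(2.0, 0.5, n), rng.normal(15.0, 5.0, n))
energy_mev = np.where(is_gamma, rng.uniform(1.0, 6.0, n), rng.uniform(0.1, 2.0, n))

selected = (time_ns < 4.0) & (energy_mev > 2.0)   # hypothetical TOF + energy windows
ratio_before = is_gamma.sum() / (~is_gamma).sum()
ratio_after = (selected & is_gamma).sum() / max((selected & ~is_gamma).sum(), 1)
print(f"gamma-to-neutron ratio: {ratio_before:.2f} -> {ratio_after:.2f}")
```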


Subject(s)
Monte Carlo Method , Carbon , Carbon Radioisotopes , Polymethyl Methacrylate , Tomography, X-Ray Computed
12.
Phys Med Biol ; 64(24): 245002, 2019 12 13.
Article in English | MEDLINE | ID: mdl-31711051

ABSTRACT

The Monte Carlo (MC) simulation method plays an essential role in the refinement and development of positron emission tomography (PET) systems. However, most existing MC simulation packages suffer from long execution times for practical PET simulations. To address this issue, we developed and validated gPET, a graphics processing unit (GPU)-based MC simulation tool for PET. gPET was built on the NVidia CUDA platform. The simulation process was modularized into three functional parts carried out by GPU parallel threads: (1) source management, including positron decay, transport and annihilation; (2) gamma transport inside the phantom; and (3) signal detection and processing inside the detector. A hybrid of voxelized (for patient phantoms) and parametrized (for detectors) geometries was employed to support particle navigation. Multiple inputs and outputs are available, so a user can flexibly examine different aspects of a PET simulation. We evaluated the performance of gPET against benchmark results from GATE 8.0 in three test cases, covering the functional modules, the physics models used for gamma transport inside the detector, and the geometric configuration of an irregularly shaped PET detector. Both accuracy and efficiency were quantified. In all test cases, the differences between gPET and GATE in the coincidence distributions with respect to energy and crystal index are below 3.18% and 2.54%, respectively. The speedup factor of gPET on a single Titan Xp GPU (1.58 GHz) over GATE 8.0 on a single core of an Intel i7-6850K CPU (3.6 GHz) is 500 for all test cases. In summary, gPET is an accurate and efficient MC simulation tool for PET.
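
To give a feel for the "source management" module, the toy snippet below samples isotropic emission directions and pairs each decay with two back-to-back 511 keV annihilation photons. gPET itself runs this on the GPU in CUDA and includes positron range and acollinearity, both of which are ignored in this simplification.

```python
# Toy version of one piece of PET source management: isotropic sampling of
# back-to-back 511 keV annihilation photons (positron range and acollinearity
# are ignored; not gPET's actual CUDA code).
import numpy as np

rng = np.random.default_rng(0)
n_decays = 5

cos_theta = rng.uniform(-1.0, 1.0, n_decays)          # isotropic direction sampling
phi = rng.uniform(0.0, 2.0 * np.pi, n_decays)
sin_theta = np.sqrt(1.0 - cos_theta**2)
direction = np.stack([sin_theta * np.cos(phi),
                      sin_theta * np.sin(phi),
                      cos_theta], axis=1)

gamma1 = {"energy_keV": 511.0, "direction": direction}
gamma2 = {"energy_keV": 511.0, "direction": -direction}   # back-to-back partner
print(gamma1["direction"][0], gamma2["direction"][0])
```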


Subject(s)
Computer Simulation/standards , Software/standards , Tomography, X-Ray Computed/methods , Humans , Monte Carlo Method , Phantoms, Imaging , Reproducibility of Results
13.
Quant Infrared Thermogr J ; 15(2): 223-239, 2018.
Article in English | MEDLINE | ID: mdl-30542379

ABSTRACT

An infrared (IR) thermal camera may provide a tool for real-time temperature monitoring for precise disease treatment using heat generated by light-induced photosensitisers, i.e. photothermal/ablation therapies. In this work, we quantitatively demonstrated that the spatial resolution of a low-cost, low-resolution IR camera can be improved via two deconvolution methods. The camera point spread function (PSF) was modeled experimentally and used to develop the deconvolution methods: (1) Richardson-Lucy blind deconvolution (BD); and (2) total variation constrained deconvolution (TD). The experimental results showed improved spatial resolution (at 50% modulation transfer function (MTF): from the original 1.1 cycles/mm to 2.6 cycles/mm for the BD method and to 4.8 cycles/mm for the TD method) as well as improved contrast-to-noise ratio. With a properly chosen parameter, the TD method can resolve 1-mm objects with accurate temperature readings. The thermal image from the low-resolution IR camera enhanced by the TD method is comparable to that from a high-resolution IR camera. These results show that the TD method provides an effective way to improve the thermal image quality from a low-cost IR camera so it can monitor the temperature of 1-mm objects, which meets the precision needed for advanced laser scanning protocols in photothermal/ablation therapies.
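
As a point of reference for the deconvolution step, the sketch below runs the non-blind textbook Richardson-Lucy algorithm from scikit-image on a synthetic blurred "thermal" target; the Gaussian PSF stands in for the experimentally measured one, and the paper's blind (BD) and TV-constrained (TD) variants are not reproduced.

```python
# Richardson-Lucy deconvolution of a blurred toy thermal image (non-blind
# textbook variant; PSF and scene are made up for illustration).
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)
scene = np.zeros((128, 128))
scene[60:68, 60:61] = 1.0                      # 1-pixel-wide hot target

# Hypothetical Gaussian PSF standing in for the measured camera PSF.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = fftconvolve(scene, psf, mode="same") + rng.normal(0, 1e-3, scene.shape)
restored = richardson_lucy(np.clip(blurred, 0, None), psf)
print(restored.max(), blurred.max())           # restoration sharpens the peak
```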

14.
Appl Opt ; 57(10): 2376-2382, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29714224

ABSTRACT

A snapshot imaging polarimeter using spatial modulation can encode four Stokes parameters, allowing instantaneous polarization measurement from a single interferogram. However, the reconstructed polarization images can suffer from severe aliasing if prominent high-frequency components of the intensity image fall into the polarization channels, and the reconstructed intensity image also suffers a loss of spatial resolution due to low-pass filtering. In this work, a method using two anti-phase snapshots is proposed to address both problems simultaneously. The full-resolution target image and the pure interference fringes are obtained from the sum and the difference of the two anti-phase interferograms, respectively. The polarization information reconstructed from the pure interference fringes does not contain the aliasing signal from the high-frequency components of the object intensity image. The principles of the method are derived and its feasibility is tested by both computer simulation and a verification experiment. This work provides a novel method for spatially modulated imaging polarimetry with two snapshots to simultaneously reconstruct a full-resolution object intensity image and high-quality polarization components.
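
The core sum/difference step is easy to see on a toy one-dimensional signal, as below; the additive intensity-plus-fringes model and the frequencies used are simplifications for illustration, whereas the real method operates on two-dimensional polarization-modulated interferograms.

```python
# Sum/difference recovery from two anti-phase interferograms: the fringes flip
# sign between snapshots, so the sum gives the full-resolution intensity and
# the difference gives the pure fringes (toy 1-D signals, invented numbers).
import numpy as np

x = np.linspace(0, 1, 1024)
intensity = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * x)       # stand-in object intensity
fringes = 0.3 * np.cos(2 * np.pi * 60 * x)              # polarization-carrying fringes

snapshot_a = intensity + fringes                        # anti-phase pair
snapshot_b = intensity - fringes

recovered_intensity = 0.5 * (snapshot_a + snapshot_b)
recovered_fringes = 0.5 * (snapshot_a - snapshot_b)
print(np.allclose(recovered_intensity, intensity),
      np.allclose(recovered_fringes, fringes))
```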

15.
Phys Med Biol ; 63(11): 115007, 2018 05 29.
Article in English | MEDLINE | ID: mdl-29722297

ABSTRACT

Four-dimensional (4D) x-ray cone-beam computed tomography (CBCT) is important for precise radiation therapy of lung cancer. Because of the repeated use and 4D acquisition over a course of radiotherapy, the radiation dose becomes a concern. Meanwhile, scatter contamination in CBCT deteriorates image quality for treatment tasks. In this work, we propose the use of a moving blocker (MB) during 4D CBCT acquisition ('4D MB'), combined with motion-compensated reconstruction, to address these two issues simultaneously. In 4D MB CBCT, the moving blocker reduces the x-ray flux passing through the patient and at the same time collects scatter information in the blocked region. The scatter signal is estimated from the blocked region for correction. Even though the number of projection views and the projection data in each view are not complete for conventional reconstruction, 4D reconstruction with a total-variation (TV) constraint and a motion-compensated temporal constraint can utilize both spatial gradient sparsity and temporal correlations among different phases to overcome the missing-data problem. Feasibility simulation studies using the 4D NCAT phantom showed that 4D MB with motion-compensated reconstruction and a 1/3 imaging dose reduction could produce satisfactory images and achieve a 37% improvement in structural similarity (SSIM) index and a 55% improvement in root mean square error (RMSE), compared to 4D reconstruction at the regular imaging dose without scatter correction. For the same 4D MB data, 4D reconstruction outperformed 3D TV reconstruction by 28% in SSIM and 34% in RMSE. A study of synthetic patient data also demonstrated the potential of 4D MB to reduce the radiation dose by 1/3 without compromising image quality. This work paves the way for more comprehensive studies to investigate the dose reduction limit offered by this novel 4D MB method using physical phantom experiments and real patient data with clinically relevant metrics.
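
The scatter-estimation idea behind the moving blocker is illustrated below on a toy one-dimensional detector profile: the signal measured inside the blocked strips is treated as scatter and interpolated into the unblocked region, then subtracted. The primary/scatter profiles and strip geometry are invented for illustration.

```python
# Toy 1-D moving-blocker scatter estimation by interpolation from blocked
# strips (all profiles and the strip pattern are made up).
import numpy as np

u = np.arange(480)                                       # detector pixel index
primary = 1000.0 * np.exp(-((u - 240) / 150.0)**2)       # stand-in primary profile
scatter = 80.0 + 0.1 * u                                 # smooth stand-in scatter

blocked = (u % 64) < 32                                  # alternating blocked strips
measured = np.where(blocked, scatter, primary + scatter) # blocker removes primary

scatter_est = np.interp(u[~blocked], u[blocked], measured[blocked])
primary_corrected = measured[~blocked] - scatter_est
print(np.max(np.abs(primary_corrected - primary[~blocked])))  # near-zero residual
```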


Subject(s)
Algorithms , Cone-Beam Computed Tomography/methods , Four-Dimensional Computed Tomography/methods , Lung Neoplasms/radiotherapy , Phantoms, Imaging , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Movement , Radiation Dosage , Scattering, Radiation
16.
J Neurosci Methods ; 302: 35-41, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29486213

ABSTRACT

BACKGROUND: Disease progression spans a spectrum from healthy control (HC), to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), to AD. This study aims to predict these disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. NEW METHOD: Neighborhood component analysis (NCA) is applied to select the most powerful features for prediction. An ensemble decision tree classifier is built to predict which group a subject belongs to. The best features and model parameters are determined by cross-validation on the training data. RESULTS: Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including the MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features achieves a prediction accuracy of 56.25% on 160 test subjects. COMPARISON WITH EXISTING METHOD(S): Principal component analysis (PCA) and sequential feature selection (SFS) are used for feature selection, while the support vector machine (SVM) is used for classification. The boosting tree model with NCA features outperforms all other combinations of feature selection and classification methods. CONCLUSIONS: The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study. An ensemble tree classifier with boosting is more powerful than SVM at predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve the prediction performance.
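
A stand-in for this pipeline, a supervised NCA step followed by a boosted tree classifier with cross-validation, is sketched below using scikit-learn. Note that scikit-learn's NCA learns a linear transform rather than the per-feature weights used for selection in the paper, and the data are synthetic rather than structural MRI measures, so treat this only as an illustration of the overall workflow.

```python
# Illustrative NCA + boosted-tree pipeline on synthetic four-class data
# (a stand-in for the paper's NCA feature selection and boosting tree model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=240, n_features=50, n_informative=16,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

model = make_pipeline(StandardScaler(),
                      NeighborhoodComponentsAnalysis(n_components=16, random_state=0),
                      GradientBoostingClassifier(random_state=0))

scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```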


Subject(s)
Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Machine Learning , Alzheimer Disease/classification , Alzheimer Disease/pathology , Brain/pathology , Cognitive Dysfunction/classification , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/pathology , Decision Trees , Disease Progression , Female , Humans , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Male , Pattern Recognition, Automated , Principal Component Analysis
17.
PLoS One ; 12(12): e0189620, 2017.
Article in English | MEDLINE | ID: mdl-29267307

ABSTRACT

Scatter contamination is one of the main causes of image quality degradation in cone-beam computed tomography (CBCT). The moving blocker method is an economical and effective approach to scatter correction (SC) that can simultaneously estimate scatter and reconstruct the complete volume within the field of view (FOV) from a single CBCT scan. However, in regions with large intensity transitions in the projection images along the axial blocker-moving direction, estimating the scatter signal from the blocked regions of a single projection view can produce large errors, cause significant artifacts in the reconstructed images, and nullify the usability of these regions. Furthermore, blocker edge-detection errors can significantly degrade both primary and scatter signal estimation and lead to unacceptable reconstruction results. In this study, we propose to use adjacent multi-view projection images to jointly estimate the scatter signal more accurately. In turn, the more accurately estimated scatter signal can be utilized to detect blocker edges more accurately, greatly improving the robustness of moving-blocker-based SC. The experimental results using Catphan phantom and anthropomorphic pelvis phantom CBCT data show that the new method effectively suppresses scatter estimation errors in fast signal-transition regions and is able to correct blocker detection errors. This development will expand the utility of moving-blocker-based SC to targets with sharp intensity changes in the projection images and provide the robustness needed for its clinical translation.


Subject(s)
Cone-Beam Computed Tomography/methods , Humans , Pelvis/diagnostic imaging , Phantoms, Imaging
18.
PLoS One ; 12(3): e0172938, 2017.
Article in English | MEDLINE | ID: mdl-28253298

ABSTRACT

Alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method uses projection onto convex sets (POCS) for data fidelity and non-negativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of a constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and to quantification of reconstruction parameters in a physically meaningful way rather than by empirical trial-and-error. In addition, since first-order methods are usually used for large-scale optimization problems, we derive the convergence condition for the gradient-based methods and use a primal-dual hybrid gradient (PDHG) method for fast convergence of the bounded-TV step. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality and quantification.
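
The algorithmic skeleton of alternating projections onto convex sets is shown below on two simple sets, the non-negativity orthant and an l2 ball standing in for a bounded data-fidelity constraint. The real FS-POCS method adds a bounded-TV set and works on CT projection data; the sets and sizes here are invented.

```python
# Bare-bones alternating projections (POCS) onto two convex sets:
# non-negativity and an l2 "data-fidelity" ball (toy stand-ins).
import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(-0.1, 1.0, size=50)     # centre of the fidelity ball (stand-in data)
radius = 1.0

def project_nonnegative(x):
    return np.maximum(x, 0.0)

def project_ball(x, centre, r):
    d = x - centre
    norm = np.linalg.norm(d)
    return x if norm <= r else centre + d * (r / norm)

x = np.zeros_like(y)
for _ in range(200):                    # alternate the two projections
    x = project_ball(project_nonnegative(x), y, radius)

print(np.min(x) >= -1e-8,                         # (approximately) non-negative?
      np.linalg.norm(x - y) <= radius + 1e-8)     # inside the fidelity ball?
```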


Subject(s)
Tomography, X-Ray Computed/methods , Algorithms , Humans , Models, Theoretical , Phantoms, Imaging
19.
Phys Med Biol ; 61(15): 5639-61, 2016 08 07.
Article in English | MEDLINE | ID: mdl-27385378

ABSTRACT

In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues arise with this approach: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs between the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with the measured projections from other phases. The OSEM-TV image reconstruction was repeated using the updated DVFs, and new DVFs were estimated based on the updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm for lung and liver tumors with different contrasts and diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading to up to 150% overestimation of tumor size and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential for motion estimation/correction in 4D-PET.
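
For context on the reconstruction engine, the snippet below runs the textbook MLEM update (the core of OSEM, here with a single subset, no TV term and no motion model) on a tiny random system matrix; the system, image and counts are all synthetic stand-ins.

```python
# Textbook MLEM iterations on a tiny synthetic system (the building block
# behind OSEM-TV; no subsets, TV regularization or motion model included).
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 10))       # stand-in system matrix (detector x voxel)
x_true = rng.uniform(1.0, 5.0, size=10)        # stand-in activity image
y = rng.poisson(A @ x_true).astype(float)      # noisy projection data

x = np.ones(10)                                # flat initial image
sensitivity = A.T @ np.ones(len(y))
for _ in range(100):                           # MLEM updates
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sensitivity

print(np.round(x, 2))                          # reconstructed activity
print(np.round(x_true, 2))                     # ground truth for comparison
```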


Subject(s)
Four-Dimensional Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Respiratory-Gated Imaging Techniques/methods , Algorithms , Artifacts , Humans , Motion , Phantoms, Imaging
20.
Comput Biol Med ; 56: 97-106, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25464352

ABSTRACT

In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational cost. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
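
The data-adaptive step-size idea is sketched below on a one-dimensional toy problem: after each data-fidelity update, the TV steepest-descent step is scaled by the size of that update. The 1-D "image", the scaling factor, and the simple relaxation standing in for the ART/POCS stage are all invented for illustration and are not the paper's algorithm.

```python
# Sketch of adaptive TV step sizing: scale the TV descent step by the size of
# the preceding data-fidelity update (toy 1-D problem, invented parameters).
import numpy as np

rng = np.random.default_rng(0)
true_img = np.repeat([0.0, 1.0, 0.4], 40)                 # piecewise-constant 1-D image
data = true_img + rng.normal(0, 0.05, true_img.size)      # noisy "measurement"

def tv_gradient(x, eps=1e-8):
    d = np.diff(x)                      # forward differences x[i+1] - x[i]
    s = d / np.sqrt(d**2 + eps)         # smoothed sign of each difference
    g = np.zeros_like(x)
    g[:-1] -= s                         # d|x[i+1]-x[i]| / dx[i]
    g[1:] += s                          # d|x[i+1]-x[i]| / dx[i+1]
    return g

x = np.zeros_like(data)
alpha = 0.2                             # hypothetical relative TV step size
for _ in range(50):
    x_prev = x.copy()
    x += 0.5 * (data - x)               # stand-in for the POCS/ART data step
    tv_step = alpha * np.linalg.norm(x - x_prev)   # adapt TV step to data-step size
    for _ in range(10):                 # a few steepest-descent steps on TV
        g = tv_gradient(x)
        x -= tv_step * g / (np.linalg.norm(g) + 1e-12)

tv = lambda z: np.sum(np.abs(np.diff(z)))
print(f"total variation: noisy data {tv(data):.2f}, reconstruction {tv(x):.2f}")
```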


Subject(s)
Image Processing, Computer-Assisted/methods , Models, Theoretical , Tomography, X-Ray Computed/methods , Humans