ABSTRACT
PURPOSE: A novel phantom-imaging platform, a set of software tools for automated and high-precision imaging of the American College of Radiology (ACR) positron emission tomography (PET) phantom on PET/magnetic resonance (PET/MR) and PET/computed tomography (PET/CT) systems, is proposed. METHODS: The key feature of this platform is the vector-graphics design that facilitates the automated measurement of the knife-edge response function, and hence image resolution, using composite volume-of-interest templates on a 0.5 mm resolution grid applied to all inserts of the phantom. Furthermore, the proposed platform enables the generation of an accurate μ-map for PET/MR systems with robust alignment based on two-stage image registration using specifically designed PET templates. The platform is built on the open-source NiftyPET software package, which is used to generate multiple list-mode data bootstrap realizations and image reconstructions to determine the precision of the two-stage registration and any image-derived statistics. For all the analyses, iterative image reconstruction was employed with and without a modelled shift-invariant point spread function (PSF) and with varying iterations of the ordered subsets expectation maximization (OSEM) algorithm. The impact of activity outside the field of view (FOV) was assessed using two acquisitions of 30 min each, with and without activity outside the FOV. RESULTS: The utility of the platform has been demonstrated by providing a standard and an advanced phantom analysis, including the estimation of spatial resolution using all cylindrical inserts. In the imaging planes close to the edge of the axial FOV, we observed deterioration in quantitative accuracy, reduced resolution (full width at half maximum (FWHM) increased by 1-2 mm), reduced contrast, and degraded background uniformity due to the activity outside the FOV. Although it slowed convergence, PSF reconstruction had a positive impact on resolution and contrast recovery, with the degree of improvement depending on the region. The uncertainty analysis based on bootstrap resampling of raw PET data indicated high precision of the two-stage registration. CONCLUSIONS: We demonstrated that phantom imaging using the proposed methodology, with the metric of spatial resolution and multiple bootstrap realizations, may be helpful in the more accurate evaluation of PET systems as well as in facilitating the fine-tuning of imaging parameters in PET/MR and PET/CT clinical research studies.
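The knife-edge analysis itself is implemented within the described platform; as a minimal illustration of the underlying idea (with synthetic data and hypothetical parameters), the following Python sketch estimates FWHM by fitting an error-function edge model to a profile sampled across an insert boundary on a 0.5 mm grid.

```python
# Minimal sketch: estimate image resolution (FWHM) from a knife-edge profile.
# A step edge blurred by a Gaussian PSF integrates to an error function, so
# fitting an erf model yields sigma and hence FWHM = 2*sqrt(2*ln 2)*sigma.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, amp, x0, sigma, offset):
    """Ideal step edge convolved with a Gaussian PSF."""
    return offset + 0.5 * amp * (1 + erf((x - x0) / (sigma * np.sqrt(2))))

rng = np.random.default_rng(0)
x = np.arange(0, 20, 0.5)                         # 0.5 mm grid
profile = edge_model(x, 10.0, 10.0, 1.8, 1.0)     # synthetic edge profile
profile += rng.normal(0, 0.1, x.size)             # measurement noise

p0 = (np.ptp(profile), x.mean(), 2.0, profile.min())
popt, _ = curve_fit(edge_model, x, profile, p0=p0)
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])
print(f"estimated FWHM: {fwhm:.2f} mm")
```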
Subject(s)
Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Positron-Emission Tomography/methods; Software
ABSTRACT
(18)F-fluoro-deoxy-glucose ((18)F-FDG) positron emission tomography (PET) is one of the most sensitive and specific imaging modalities for the diagnosis of non-small cell lung cancer. A drawback of PET is that it requires several minutes of acquisition per bed position, which results in images being affected by respiratory blur. Respiratory gating techniques have been developed to deal with respiratory motion in PET images. However, these techniques considerably increase the level of noise in the reconstructed images unless the acquisition time is increased. The aim of this paper is to evaluate a four-dimensional (4D) image reconstruction algorithm that combines the acquired events in all the gates whilst preserving the motion deblurring. This algorithm was compared to classic ordered subset expectation maximization (OSEM) reconstruction of gated and non-gated images, and to temporal filtering of gated images reconstructed with OSEM. Two datasets were used to compare the different reconstruction approaches: one involving the NEMA IEC/2001 body phantom in motion, the other obtained using Monte Carlo simulations of the NCAT breathing phantom. Results show that 4D reconstruction achieves similar performance in terms of the signal-to-noise ratio (SNR) to non-gated reconstruction whilst preserving the motion deblurring. In particular, 4D reconstruction improves the SNR compared to respiratory-gated images reconstructed with the OSEM algorithm. Temporal filtering of the OSEM-reconstructed images helps improve the SNR, but does not achieve the same performance as 4D reconstruction. 4D reconstruction of respiratory-gated images thus appears to be a promising tool for reaching the same SNR performance as non-gated acquisitions while reducing motion blur, without increasing the acquisition time.
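As a rough, self-contained illustration of the SNR trade-off being evaluated (not the authors' reconstruction code), the sketch below compares ROI SNR for a single gate, the non-gated combination of gates, and temporally filtered gates, using synthetic values.

```python
# Illustrative only: ROI signal-to-noise for (i) a single respiratory gate,
# (ii) the non-gated combination of all gates and (iii) temporally filtered
# gates. Synthetic numbers stand in for reconstructed phantom images.
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(0)
n_gates, roi_size, signal = 8, 500, 100.0
gates = signal + rng.normal(0, 20, size=(n_gates, roi_size))  # noisy gates

def snr(roi):
    return roi.mean() / roi.std()

print("single gate SNR:", snr(gates[0]))
print("non-gated SNR:  ", snr(gates.mean(axis=0)))  # noise averages down
filtered = uniform_filter1d(gates, size=3, axis=0)  # temporal filtering
print("filtered SNR:   ", snr(filtered[0]))
```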
Subject(s)
Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Respiratory-Gated Imaging Techniques/methods; Artifacts; Humans; Models, Biological; Movement; Phantoms, Imaging; Reproducibility of Results
ABSTRACT
A new technique for modelling multiple-order Compton scatter which uses the absolute probabilities relating the image space to the projection space in 3D whole body PET is presented. The details considered in this work give a valuable insight into the scatter problem, particularly for multiple scatter. Such modelling is advantageous for large attenuating media where scatter is a dominant component of the measured data, and where multiple scatter may dominate the total scatter depending on the energy threshold and object size. The model offers distinct features setting it apart from previous research: (1) specification of the scatter distribution for each voxel based on the transmission data, the physics of Compton scattering and the specification of a given PET system; (2) independence from the true activity distribution; (3) in principle no scaling or iterative process is required to find the distribution; (4) explicit multiple scatter modelling; (5) no scatter subtraction or addition to the forward model when included in the system matrix used with statistical image reconstruction methods; (6) adaptability to many different scatter compensation methods from simple and fast to more sophisticated and therefore slower methods; (7) accuracy equivalent to that of a Monte Carlo model. The scatter model has been validated using Monte Carlo simulation (SimSET).
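The model builds per-voxel scatter probabilities from the physics of Compton scattering; one standard building block of any such model, sketched below, is the Klein-Nishina differential cross-section (the full single- and multiple-scatter integration over the attenuation map is beyond this snippet).

```python
# Klein-Nishina differential cross-section for Compton scattering of a
# 511 keV annihilation photon, plus the scattered photon energy, which is
# what the scanner's energy threshold discriminates against.
import numpy as np

R_E = 2.8179403262e-15    # classical electron radius (m)
MEC2 = 511.0              # electron rest energy (keV)

def klein_nishina(theta, e0_kev=511.0):
    """d(sigma)/d(Omega) in m^2/sr at scattering angle theta (rad)."""
    k = e0_kev / MEC2
    p = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E'/E energy ratio
    return 0.5 * R_E**2 * p**2 * (p + 1.0 / p - np.sin(theta)**2)

theta = np.linspace(0.0, np.pi, 181)
dsdo = klein_nishina(theta)
e_scattered = 511.0 / (1.0 + (511.0 / MEC2) * (1.0 - np.cos(theta)))
print(f"cross-section at 30 deg: {klein_nishina(np.deg2rad(30)):.3e} m^2/sr")
```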
Subject(s)
Positron-Emission Tomography/statistics & numerical data; Algorithms; Biophysical Phenomena; Biophysics; Humans; Imaging, Three-Dimensional/statistics & numerical data; Models, Theoretical; Monte Carlo Method; Photons; Scattering, Radiation
ABSTRACT
Respiratory motion is a source of artefacts and reduced image quality in PET. Proposed methodology for the correction of respiratory effects involves the use of gated frames, which are, however, of low signal-to-noise ratio. Therefore a method accounting for respiratory motion effects without affecting the statistical quality of the reconstructed images is necessary. We have implemented an affine transformation of list-mode data for the correction of respiratory motion over the thorax. The study was performed using datasets of the NCAT phantom at different points throughout the respiratory cycle. Simulated list-mode PET frames were produced by combining the NCAT datasets with a Monte Carlo simulation. Transformation parameters accounting for respiratory motion were estimated according to an affine registration and were subsequently applied to the original list-mode data. The corrected and uncorrected list-mode datasets were subsequently reconstructed using the one-pass list-mode EM (OPL-EM) algorithm. Comparison of corrected and uncorrected respiratory-motion-average frames suggests that an affine transformation of the list-mode data prior to reconstruction can produce significant improvements in accounting for respiratory motion artefacts in the lungs and heart. However, the application of a common set of transformation parameters across the imaging field of view does not significantly correct the respiratory effects on organs such as the stomach, liver or spleen.
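The core operation, applying a single estimated affine transformation to the list-mode event coordinates before reconstruction, can be sketched as follows; the rotation and translation below are hypothetical stand-ins for registration output.

```python
# Sketch: apply one global affine transform [R | t] to the 3D endpoints of
# list-mode LORs prior to reconstruction. In practice the 3x4 matrix comes
# from affine registration between respiratory positions.
import numpy as np

def apply_affine(points_xyz, affine_3x4):
    """points_xyz: (N, 3) coordinates in mm; returns transformed (N, 3)."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    return homo @ affine_3x4.T

angle = np.deg2rad(2.0)                       # small rotation about x
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(angle), -np.sin(angle)],
              [0.0, np.sin(angle),  np.cos(angle)]])
t = np.array([0.0, 0.0, 12.0])                # 12 mm cranio-caudal shift
A = np.hstack([R, t[:, None]])

endpoints = np.random.rand(10000, 3) * 300.0  # fake LOR endpoints, 300 mm FOV
corrected = apply_affine(endpoints, A)        # motion-corrected endpoints
```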
Subject(s)
Image Processing, Computer-Assisted/methods; Neoplasms/diagnosis; Neoplasms/pathology; Positron-Emission Tomography/methods; Respiration; Algorithms; Computer Simulation; Humans; Image Interpretation, Computer-Assisted/methods; Lung/pathology; Models, Statistical; Monte Carlo Method; Myocardium/pathology; Phantoms, Imaging; Software
ABSTRACT
Respiratory motion in emission tomography leads to reduced image quality. Correction methodology developed to date has concentrated on the use of respiratory-synchronized acquisitions leading to gated frames. Such frames, however, have a low signal-to-noise ratio as a result of their reduced count statistics. In this work, we describe the implementation of an elastic transformation within a list-mode-based reconstruction for the correction of respiratory motion over the thorax, allowing the use of all data available throughout a respiratory-motion-average acquisition. The developed algorithm was evaluated using datasets of the NCAT phantom generated at different points throughout the respiratory cycle. Simulated list-mode PET frames were subsequently produced by combining the NCAT datasets with a Monte Carlo simulation. A non-rigid registration algorithm based on B-spline basis functions was employed to derive transformation parameters accounting for the respiratory motion using the NCAT dynamic CT images. The derived displacement matrices were subsequently applied during the image reconstruction of the original emission list-mode data. Two different implementations for the incorporation of the elastic transformations within the one-pass list-mode EM (OPL-EM) algorithm were developed and evaluated. The corrected images were compared with those produced using an affine transformation of list-mode data prior to reconstruction, as well as with uncorrected respiratory-motion-average images. Results demonstrate that although both correction techniques considered lead to significant improvements in accounting for respiratory motion artefacts in the lung fields, the elastic-transformation-based correction leads to a more uniform improvement across the lungs for different lesion sizes and locations.
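A minimal sketch of the image-space ingredient, warping a volume with a dense displacement field such as the B-spline registration would produce, is given below; in the described algorithm the transformation (and its adjoint) is applied within the OPL-EM reconstruction rather than to images after the fact.

```python
# Sketch: warp a 3D volume with a dense displacement field (in voxels), the
# kind of field a B-spline registration of dynamic CT frames yields after
# evaluation on the voxel grid. Trilinear interpolation via map_coordinates.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, disp):
    """volume: (Z, Y, X); disp: (3, Z, Y, X) displacement field."""
    grid = np.indices(volume.shape).astype(np.float64)
    coords = grid + disp              # where each output voxel samples from
    return map_coordinates(volume, coords, order=1, mode='nearest')

vol = np.zeros((32, 64, 64))
vol[10:20, 24:40, 24:40] = 1.0        # toy thoracic "structure"
disp = np.zeros((3,) + vol.shape)
# axially varying displacement, crudely mimicking respiratory deformation
disp[0] = 3.0 * np.sin(np.linspace(0, np.pi, vol.shape[0]))[:, None, None]
warped = warp(vol, disp)
```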
Subject(s)
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Movement; Positron-Emission Tomography/methods; Respiratory Mechanics; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated with regular normalization scans. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out-of-date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies on the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, although the approach is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations over a period of 230 days and found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve results comparable to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations with a wide variety of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners with photodetectors that have relatively low dispersion in their efficiencies.
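The factorization described here can be sketched with hypothetical geometry and a 2% efficiency dispersion (the actual Biograph mMR component values differ):

```python
# Sketch of component-based normalization: the factor for the LOR joining
# crystal i in ring ri to crystal j in ring rj is modelled as a product of a
# time-invariant geometric term, intrinsic crystal efficiencies and an axial
# (ring-pair) factor.
import numpy as np

n_crystals, n_rings = 504, 64                 # hypothetical geometry
geom = np.ones((n_crystals, n_crystals))      # time-invariant component
rng = np.random.default_rng(0)
eps = 1.0 + 0.02 * rng.standard_normal((n_rings, n_crystals))  # efficiencies
ax = np.ones(2 * n_rings - 1)                 # fixed axial factors

def norm_factor(i, j, ri, rj):
    return geom[i, j] * eps[ri, i] * eps[rj, j] * ax[ri + rj]

# with low-dispersion efficiencies (~2%), setting eps to 1 changes the
# factors only marginally -- the premise behind a time-invariant normalization
print("mean |efficiency - 1|:", np.abs(eps - 1.0).mean())
```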
Subject(s)
Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/standards; Multimodal Imaging/standards; Phantoms, Imaging; Positron-Emission Tomography/standards; Artifacts; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Positron-Emission Tomography/methods; Time Factors
ABSTRACT
New molecular imaging technologies are being developed specifically for imaging animal models of human disease. Positron emission tomography (PET) in particular allows in vivo biochemistry to be studied with a high degree of sensitivity and specificity, and provides direct in vivo information on molecular and cellular pathways that underlie disease mechanisms and therapeutics. However, clinical PET systems have inadequate resolution for imaging small animals. Thus, specialized high-resolution PET hardware and software are now being developed.
Subject(s)
Diagnostic Imaging/trends; Animals; Gene Expression/physiology; Humans; Magnetic Resonance Imaging; Tomography, Emission-Computed
ABSTRACT
The PETRRA positron camera is a large-area (600 mm x 400 mm sensitive area) prototype system that has been developed through a collaboration between the Rutherford Appleton Laboratory and the Institute of Cancer Research/Royal Marsden Hospital. The camera uses novel technology involving the coupling of 10 mm thick barium fluoride scintillating crystals to multi-wire proportional chambers filled with a photosensitive gas. The performance of the camera is reported here and shows that the present system has a 3D spatial resolution of approximately 7.5 mm full-width-half-maximum (FWHM), a timing resolution of approximately 3.5 ns (FWHM), a total coincidence count-rate performance of at least 80-90 kcps and a randoms-corrected sensitivity of approximately 8-10 kcps kBq(-1) ml. For an average concentration of 3 kBq ml(-1), as expected in a patient, it is shown that approximately 20% of the data would be true events for the present prototype. The count-rate performance is presently limited by the obsolete off-camera read-out electronics and computer system, and the sensitivity by the use of thin (10 mm) crystals. The prototype camera has limited scatter rejection and no intrinsic shielding and is, therefore, susceptible to high levels of scatter and out-of-field activity when imaging patients. All these factors are being addressed to improve the performance of the camera. The large axial field of view of 400 mm makes the camera ideally suited to whole-body PET imaging. We present examples of preliminary clinical images taken with the prototype camera. Overall, the results show the potential of this alternative technology, justifying further development.
Subject(s)
Gamma Cameras; Image Enhancement/instrumentation; Image Interpretation, Computer-Assisted/instrumentation; Positron-Emission Tomography/instrumentation; Signal Processing, Computer-Assisted/instrumentation; Transducers; Equipment Design; Equipment Failure Analysis; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pilot Projects; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
Bootstrap resampling has been successfully used for the estimation of statistical uncertainty of parameters such as tissue metabolism, blood flow or displacement fields for image registration. The performance of bootstrap resampling as applied to PET list-mode data of the human brain and dedicated phantoms is assessed in a novel and systematic way such that: (1) the assessment is carried out in two resampling stages: the 'real world' stage, where multiple reference datasets of varying statistical level are generated, and the 'bootstrap world' stage, where corresponding bootstrap replicates are generated from the reference datasets; (2) all resampled datasets are reconstructed, yielding images from which multiple voxel and region-of-interest (ROI) values are extracted to form corresponding distributions between the two stages; (3) the difference between the distributions from both stages is quantified using the Jensen-Shannon divergence and the first four moments. It was found that the bootstrap distributions are consistently different from the real world distributions across the statistical levels. The difference was explained by a shift in the mean (up to 33% for voxels and 14% for ROIs) proportional to the inverse square root of the statistical level (number of counts). Other moments were well replicated by the bootstrap, although for very low statistical levels the estimation of the variance was poor. Therefore, the bootstrap method should be used with care when estimating systematic errors (bias) and variance when very low statistical levels are present, such as in early time frames of dynamic acquisitions, when the underlying population may not be sufficiently represented.
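The comparison machinery can be sketched as follows: given samples from the 'real world' and 'bootstrap world' stages, compute the first four moments and the Jensen-Shannon divergence between the binned distributions (synthetic samples stand in for reconstructed voxel/ROI values).

```python
# Sketch: quantify the difference between 'real world' and 'bootstrap world'
# distributions of a voxel/ROI value via the first four moments and the
# Jensen-Shannon divergence of their histograms.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
real = rng.normal(100.0, 10.0, 5000)   # values from reference reconstructions
boot = rng.normal(103.0, 10.0, 5000)   # bootstrap replicates (mean shifted)

bins = np.histogram_bin_edges(np.concatenate([real, boot]), bins=64)
p, _ = np.histogram(real, bins=bins, density=True)
q, _ = np.histogram(boot, bins=bins, density=True)
jsd = jensenshannon(p, q) ** 2         # squared JS distance = divergence

for name, d in (("real", real), ("bootstrap", boot)):
    print(f"{name}: mean={d.mean():.2f} var={d.var():.2f} "
          f"skew={skew(d):.3f} kurt={kurtosis(d):.3f}")
print("Jensen-Shannon divergence:", jsd)
```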
Subject(s)
Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Models, Theoretical; Phantoms, Imaging; Positron-Emission Tomography/methods; Signal Processing, Computer-Assisted; Spinal Cord/diagnostic imaging; Algorithms; Computer Simulation; Humans; Image Interpretation, Computer-Assisted/methods; Likelihood Functions; Monte Carlo Method; Positron-Emission Tomography/instrumentation
ABSTRACT
A fast accurate iterative reconstruction (FAIR) method suitable for low-statistics positron volume imaging has been developed. The method, based on the expectation maximization-maximum likelihood (EM-ML) technique, operates on list-mode data rather than histogrammed projection data and can, in just one pass through the data, generate images with the same characteristics as several ML iterations. Use of list-mode data preserves maximum sampling accuracy and implicitly ignores lines of response (LORs) in which no counts were recorded. The method is particularly suited to systems where sampling accuracy can be lost by histogramming events into coarse LOR bins, and also to sparse data situations such as fast whole-body and dynamic imaging where sampling accuracy may be compromised by storage requirements and where reconstruction time can be wasted by including LORs with no counts. The technique can be accelerated by operating on subsets of list-mode data which also allows scope for simultaneous data acquisition and iterative reconstruction. The method is compared with a standard implementation of the EM-ML technique and is shown to offer improved resolution, contrast and noise properties as a direct result of using improved spatial sampling, limited only by hardware specifications.
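A toy illustration of the list-mode EM idea on a 1D problem is given below: the update loops only over recorded events, so LORs with zero counts never enter the computation, which is the efficiency FAIR exploits (a simple Gaussian detection model stands in for a real scanner).

```python
# Toy list-mode EM: each detected event e contributes its system-matrix row
# a_e; empty LORs are implicitly ignored. Operating on subsets of the event
# stream would give the accelerated variant described in the abstract.
import numpy as np

rng = np.random.default_rng(2)
n_vox = n_lor = 64
A = np.exp(-0.5 * ((np.arange(n_lor)[:, None]
                    - np.arange(n_vox)[None, :]) / 1.5) ** 2)
A /= A.sum(axis=0)                            # normalize detection model

truth = np.zeros(n_vox)
truth[20:28], truth[40] = 50.0, 200.0         # extended source + point source
counts = rng.poisson(A @ truth)               # histogrammed acquisition
events = np.repeat(np.arange(n_lor), counts)  # equivalent list-mode stream

sens = A.sum(axis=0)                          # sensitivity image
x = np.ones(n_vox)
for it in range(20):                          # list-mode EM iterations
    back = np.zeros(n_vox)
    for e in events:                          # pass over recorded events only
        back += A[e] / (A[e] @ x)
    x *= back / sens
```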
Subject(s)
Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, Emission-Computed; Likelihood Functions; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
We have investigated statistical list-mode reconstruction applicable to a depth-encoding high-resolution research tomograph. An image non-negativity constraint has been employed in the reconstructions and is shown to effectively remove the overestimation bias introduced by the sinogram non-negativity constraint. We have furthermore implemented a convergent subsetized (CS) list-mode reconstruction algorithm, based on previous work (Hsiao et al 2002 Conf. Rec. SPIE Med. Imaging 4684 10-19; Hsiao et al 2002 Conf. Rec. IEEE Int. Symp. Biomed. Imaging 409-12) on convergent histogram OSEM reconstruction. We have demonstrated that the first step of the convergent algorithm is exactly equivalent (unlike the histogram-mode case) to the regular subsetized list-mode EM algorithm, while the second and final step takes the form of additive updates in image space. We have shown that in terms of contrast, noise and FWHM behaviour, the CS algorithm is robust and does not result in limit cycles. A hybrid algorithm based on the ordinary and the convergent algorithms is also proposed, and is shown to combine the advantages of the two algorithms (i.e. it is able to reach a higher image quality in fewer iterations while maintaining convergent behaviour), making the hybrid approach a good alternative to the ordinary subsetized list-mode EM algorithm.
Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Positron-Emission Tomography/methods; Models, Biological; Models, Statistical; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
ABSTRACT
Four reconstruction techniques for positron volume imaging have been evaluated for scanners based on rotating planar detectors using measured and simulated data. The four techniques compared are backproject then filter (BPF), the 3D reprojection (3D RP) method for 3D filtered backprojection (FBP), Fourier rebinning (FORE) in conjunction with 2D FBP (FORE + 2D FBP) and 3D ordered subsets expectation maximization (3D OSEM). The comparison was based on image resolution and on the trade-off between contrast and noise. In general FORE + 2D FBP offered a better contrast-noise trade-off than 3D RP, whilst 3D RP offered a better trade-off than BPF. Unlike 3D RP, FORE + 2D FBP did not suffer any contrast degradation effect at the edges of the axial field of view, but was unable to take as much advantage from high-accuracy data as the other methods. 3D OSEM gave the best contrast at the expense of greater image noise. BPF, which demonstrated generally inferior contrast-noise behaviour due to use of only a subset of the data, gave more consistent spatial resolution over the field of view than the projection-data based methods, and was best at taking full advantage of high-accuracy data.
Subject(s)
Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, Emission-Computed/instrumentation; Tomography, Emission-Computed/methods; Computer Simulation; Equipment Design; Fourier Analysis; Gamma Cameras; Image Processing, Computer-Assisted/instrumentation; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity; Sodium Radioisotopes
ABSTRACT
Parametric imaging in thoracic and abdominal PET can provide additional parameters more relevant to the pathophysiology of the system under study. However, dynamic data in the body are noisy due to limited counting statistics, leading to suboptimal kinetic parameter estimates. Direct 4D image reconstruction algorithms can potentially improve kinetic parameter precision and accuracy in dynamic PET body imaging. However, construction of a common kinetic model is not always feasible and, in contrast to post-reconstruction kinetic analysis, errors in poorly modelled regions may spatially propagate to regions which are well modelled. To reduce error propagation from erroneous model fits, we implement and evaluate a new approach to direct parameter estimation by incorporating a recently proposed kinetic modelling strategy within a direct 4D image reconstruction framework. The algorithm uses a secondary, more general model to allow a less constrained model fit in regions where the kinetic model does not accurately describe the underlying kinetics. A portion of the residuals is then adaptively included back into the image, whilst preserving the primary model characteristics in other, well modelled regions using a penalty term that trades off the models. Using fully 4D simulations based on dynamic [(15)O]H2O datasets, we demonstrate reduction in propagation-related bias for all kinetic parameters. Under noisy conditions, reductions in bias due to propagation are obtained at the cost of increased noise, which in turn results in increased bias and variance of the kinetic parameters. This trade-off reflects the challenge of separating the residuals arising from poor kinetic modelling fits from the residuals arising purely from noise. Nonetheless, the overall root mean square error is reduced in most regions and parameters. Using the adaptive 4D image reconstruction, improved model fits can be obtained in poorly modelled regions, leading to reduced errors potentially propagating to regions of interest which the primary biological model accurately describes. The proposed methodology, however, depends on the secondary model, and choosing an optimal model in the residual space is critical to improving model fits.
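A post-reconstruction analogue of the adaptive strategy can be sketched as follows: fit a primary kinetic model, fit a more general secondary model to the residuals, and add a weighted portion of it back. In the paper this trade-off operates inside the 4D reconstruction through a penalty term; the models, data and weight below are illustrative assumptions.

```python
# Sketch: primary one-tissue-compartment fit plus an adaptively weighted
# secondary (polynomial) fit of the residuals, mimicking the penalty-based
# trade-off between primary and secondary models.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.1, 10.0, 40)                 # frame times (min)
cp = 100.0 * t * np.exp(-t)                    # hypothetical input function

def one_tissue(t, k1, k2):
    """Ct = K1 * exp(-k2 t) convolved with Cp (uniform time grid assumed)."""
    dt = t[1] - t[0]
    return k1 * dt * np.convolve(np.exp(-k2 * t), cp)[: t.size]

tac = one_tissue(t, 0.3, 0.15) + 0.05 * t**2   # data with unmodelled component
(k1, k2), _ = curve_fit(one_tissue, t, tac, p0=(0.2, 0.1))
fit = one_tissue(t, k1, k2)

residuals = tac - fit
secondary = np.polyval(np.polyfit(t, residuals, 3), t)  # general model
beta = 0.5                                     # role of the penalty weight
adaptive = fit + beta * secondary              # reduces propagation of misfit
```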
Subject(s)
Image Processing, Computer-Assisted/methods; Models, Biological; Positron-Emission Tomography/methods; Algorithms; Kinetics
ABSTRACT
Voxel-based morphometry (VBM) is a widely used technique for studying the structure of the brain. Direct comparisons between the results obtained using VBM and the underlying histology are limited, however. To circumvent the problems inherent in comparing VBM data in vivo with tissue samples that must generally be obtained post-mortem, we chose to consider GABAA receptors, measured using (18)F-flumazenil PET ((18)F-FMZ PET), as non-invasive neural markers to be compared with VBM data. Consistent with previous cortical thickness findings, GABAA receptor binding potential (BPND) was found to correlate positively across regions with grey matter (GM) density. These findings confirm that there is a general positive relationship between MRI-based GM density measures and GABAA receptor BPND on a region-by-region basis (i.e., regions with more GM tend to also have higher BPND).
Subject(s)
Brain/anatomy & histology; Cerebral Cortex/anatomy & histology; Cerebral Cortex/metabolism; Receptors, GABA-A/metabolism; Adolescent; Adult; Brain/diagnostic imaging; Cerebral Cortex/diagnostic imaging; Female; Flumazenil/analogs & derivatives; GABA Modulators; Humans; Image Processing, Computer-Assisted; Linear Models; Magnetic Resonance Imaging; Male; Positron-Emission Tomography; Radiopharmaceuticals; Young Adult
ABSTRACT
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged, and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
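The nested scheme can be sketched in image space: after a tomographic EM update, several inexpensive image-based EM (Richardson-Lucy-type) iterations address the deconvolution problem. Below, a Gaussian stands in for the resolution kernel and a blurred image for the tomographic update; both are assumptions for illustration.

```python
# Sketch of the nested/interleaved scheme: n_inner image-space EM iterations
# solve the resolution-model (deconvolution) problem between tomographic
# updates. A symmetric Gaussian blur is its own adjoint, which keeps the
# multiplicative update simple.
import numpy as np
from scipy.ndimage import gaussian_filter

def image_em_deconv(em_img, x, sigma, n_inner):
    """Image-based EM iterations matching blur(x) to the current EM image."""
    for _ in range(n_inner):
        blurred = gaussian_filter(x, sigma)
        ratio = em_img / np.maximum(blurred, 1e-9)
        x = x * gaussian_filter(ratio, sigma)   # multiplicative EM update
    return x

truth = np.zeros((64, 64))
truth[28:36, 28:36] = 1.0
em_img = gaussian_filter(truth, 2.0)            # stand-in tomographic update
x = np.ones_like(em_img)
x = image_em_deconv(em_img, x, sigma=2.0, n_inner=10)  # nested inner loop
```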
Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Models, Theoretical; Fluorodeoxyglucose F18; Humans; Imaging, Three-Dimensional; Phantoms, Imaging; Positron-Emission Tomography; Time Factors
ABSTRACT
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
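The connection between the EM and gradient families can be sketched directly: the gradient of the Poisson log-likelihood, preconditioned by x/s, reproduces the MLEM update at unit step length, and varying the step via a line search gives the kind of preconditioned gradient scheme compared here. The toy system below is random and purely illustrative.

```python
# Sketch: preconditioned gradient ascent on the Poisson log-likelihood.
# grad L(x) = A^T (y / Ax) - s with s = A^T 1; the EM preconditioner x/s
# with alpha = 1 gives exactly the MLEM multiplicative update.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((128, 64))
A /= A.sum(axis=0)                        # toy system matrix
truth = 10.0 * rng.random(64)
y = rng.poisson(A @ truth)                # measured data
s = A.T @ np.ones(A.shape[0])             # sensitivity image

x = np.ones(64)
for it in range(50):
    grad = A.T @ (y / np.maximum(A @ x, 1e-12)) - s
    alpha = 1.0                           # a line search over alpha yields
    x = np.maximum(x + alpha * (x / s) * grad, 0.0)  # the gradient variants
```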
Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Sensory Receptor Cells/diagnostic imaging; Aniline Compounds; Humans; Models, Theoretical; Reproducibility of Results; Sulfides
ABSTRACT
This note presents a practical approach to the custom-made design of PET phantoms enabling the use of digital radioactive distributions with high quantitative accuracy and spatial resolution. The phantom design allows planar sources of any radioactivity distribution to be imaged in transaxial and axial (sagittal or coronal) planes. Although the design presented here is specially adapted to the high-resolution research tomograph (HRRT), the presented methods can be adapted to almost any PET scanner. Although the presented phantom design has many advantages, a number of practical issues had to be overcome, such as positioning of the printed source, calibration, and the uniformity and reproducibility of printing. A well counter (WC) was used in the calibration procedure to find the nonlinear relationship between digital voxel intensities and the actual measured radioactive concentrations. Repeated printing together with WC measurements and computed radiography (CR) using phosphor imaging plates (IP) were used to evaluate the reproducibility and uniformity of such printing. Results show satisfactory printing uniformity and reproducibility; however, the calibration is dependent on the printing mode and the physical state of the cartridge. As a demonstration of the utility of printed phantoms, the image resolution and quantitative accuracy of reconstructed HRRT images are assessed. There is very good quantitative agreement in the calibration procedure between HRRT, CR and WC measurements. However, the high resolution of CR and its quantitative accuracy, supported by WC measurements, made it possible to show the degraded resolution of HRRT brain images caused by the partial-volume effect and the limits of iterative image reconstruction.
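The calibration step can be sketched as a nonlinear fit mapping digital intensity to the WC-measured activity concentration. The saturating-exponential form and all numbers below are assumptions; the note states only that the relationship is nonlinear and depends on the printing mode and cartridge state.

```python
# Sketch: calibrate printer intensity against well-counter activity and
# invert the curve to find the digital value for a target concentration.
import numpy as np
from scipy.optimize import curve_fit

intensity = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], float)
activity = np.array([0.0, 1.1, 2.0, 2.8, 3.4, 3.9, 4.2, 4.45, 4.6])  # kBq/ml

def saturating(i, a, b):
    return a * (1.0 - np.exp(-b * i))     # ink deposition saturates

(a, b), _ = curve_fit(saturating, intensity, activity, p0=(5.0, 0.01))

def intensity_for(target_kbq_ml):
    """Digital value needed to print a target activity concentration."""
    return -np.log(1.0 - target_kbq_ml / a) / b

print("intensity for 3 kBq/ml:", intensity_for(3.0))
```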
Subject(s)
Image Enhancement/instrumentation; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Brain/diagnostic imaging; Brain/pathology; Calibration; Equipment Design; Humans; Image Enhancement/methods; Phosphorus; Positron-Emission Tomography/methods; Reproducibility of Results
ABSTRACT
Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
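A minimal sketch of a spatially-variant image-space resolution operator is given below: blur with centre and edge kernels, as a point-source array scan would measure, and blend by radial position. Inside OP-OSEM this operator and its adjoint would wrap the projector; the widths here are hypothetical.

```python
# Sketch: spatially-variant image-space blurring built from two measured
# kernel widths (centre vs edge of the FOV), blended by radial position.
import numpy as np
from scipy.ndimage import gaussian_filter

def variant_blur(img, sigma_centre=1.0, sigma_edge=2.0):
    iy, ix = np.indices(img.shape, dtype=float)
    r = np.hypot(iy - img.shape[0] / 2, ix - img.shape[1] / 2)
    w = np.clip(r / r.max(), 0.0, 1.0)        # 0 at centre, 1 at edge
    return ((1 - w) * gaussian_filter(img, sigma_centre)
            + w * gaussian_filter(img, sigma_edge))

img = np.zeros((128, 128))
img[::16, ::16] = 1.0                         # point-source array
blurred = variant_blur(img)                   # spots widen away from centre
```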
Subject(s)
Positron-Emission Tomography/instrumentation; Printing/instrumentation; Tomography, X-Ray Computed/instrumentation; Abdomen/diagnostic imaging; Carbon Radioisotopes; Dideoxynucleosides; Humans; Image Processing, Computer-Assisted; Phantoms, Imaging; Radiography, Abdominal
ABSTRACT
Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need by developing a motion compensation method suitable for fully 4D reconstruction, which uses an optical tracking system to measure the head motion and PET superset data to store the motion-compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme is required for motion-compensated reconstruction from the superset data. This work proceeds to propose the corresponding time-dependent normalization modifications required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for individual PET image frames and for parametric images of tracer uptake and volume of distribution for (18)F-FDG obtained from Patlak analysis.
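The Patlak analysis used for the parametric images can be sketched as follows, with a synthetic input function and tissue curve: regress Ct/Cp against the normalized integral of the input over the late, linear part of the plot to obtain the uptake rate Ki and intercept V.

```python
# Sketch of Patlak graphical analysis: Ct/Cp = Ki * (int Cp dt)/Cp + V
# for times beyond t*, estimated by linear regression.
import numpy as np

t = np.linspace(0.5, 60.0, 24)                 # frame mid-times (min)
cp = 80.0 * np.exp(-0.3 * t) + 5.0             # hypothetical plasma input
ki_true, v_true = 0.02, 0.6
int_cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = ki_true * int_cp + v_true * cp            # irreversible-uptake tissue TAC

x = int_cp / cp                                # "Patlak time"
yv = ct / cp
late = t > 15.0                                # t* chosen from the linear part
ki, v = np.polyfit(x[late], yv[late], 1)
print(f"Ki = {ki:.4f} /min, V = {v:.3f}")
```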
Subject(s)
Artifacts; Brain/diagnostic imaging; Head Movements; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Algorithms; Computer Simulation; Databases as Topic; Fluorodeoxyglucose F18; Humans; Imaging, Three-Dimensional/methods; Linear Models; Models, Biological; Motion; Optics and Photonics/methods; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Time Factors
ABSTRACT
The concept of achieving low-resolution separations in internally heated capillary membranes is discussed in terms of controlling the diffusion coefficients of volatile organic compounds in poly(dimethylsilicone) membranes in space and time. The behaviour of 1,1,1-trichloroethane in poly(dimethylsilicone) was used in conjunction with a mixed-physics finite element model, incorporating second-order partial differential equations, to describe time and spatial variations of mass flux, membrane temperature and diffusion coefficients. The model, coded with Femlab, predicted highly non-linear diffusion coefficient profiles resulting from temperature programming a 500 µm thick membrane, with an increase in the diffusion coefficient of approximately 30% in the last 30% of the membrane thickness. Simulations of sampling hypothetical analytes, with disparate temperature-dependent diffusion coefficient relationships, predicted distinct thermal desorption profiles with selectivities that reflected the extent of diffusion through the membrane. The predicted desorption profiles of these analytes also indicated that low-resolution separations were possible. An internally heated poly(dimethylsilicone) capillary membrane was constructed from a 10 cm long, 1.5 mm o.d. capillary with 0.5 mm thick walls. Thirteen aqueous standards of volatile organic compounds of environmental significance were studied, and low-resolution separations were indicated, with temperature programming of the membrane enabling desorption profiles to be differentiated. Further, analytically useful relationships in the µg cm(-3) concentration range were demonstrated, with correlation coefficients >0.96 observed for linear regressions of desorption-profile intensities against analyte concentrations.
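A much-simplified 1D stand-in for the described finite-element model is sketched below: Fick's second law across a 500 µm membrane with an assumed Arrhenius temperature dependence of the diffusion coefficient, marched with explicit finite differences. All parameter values are illustrative only.

```python
# Simplified sketch: dc/dt = d/dx(D(T) dc/dx) across the membrane wall, with
# an Arrhenius D(T) so a spatial temperature profile produces the non-linear
# diffusion-coefficient profiles described.
import numpy as np

L, nx = 500e-6, 101                    # membrane thickness (m), grid points
dx = L / (nx - 1)
D0, Ea, R = 1e-9, 3.0e4, 8.314         # prefactor, activation energy, gas const

def diffusivity(T):
    return D0 * np.exp(-Ea / (R * T))  # Arrhenius temperature dependence

T = np.linspace(300.0, 330.0, nx)      # temperature profile during a ramp (K)
d = diffusivity(T)
c = np.zeros(nx)
dt = 0.4 * dx**2 / d.max()             # explicit stability limit

for step in range(20000):
    c[0], c[-1] = 1.0, 0.0             # fixed boundary concentrations
    flux = d[:-1] * np.diff(c) / dx    # interval fluxes (D at left node)
    c[1:-1] += dt * np.diff(flux) / dx

print("flux at inner face:", flux[-1])
```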