Results 1 - 20 of 35
1.
J Nucl Cardiol ; 29(5): 2487-2496, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34318395

ABSTRACT

BACKGROUND: Calcification and inflammation are atherosclerotic plaque compositional biomarkers that have both been linked to stroke risk. The aim of this study was to evaluate their co-existing prevalence in human carotid plaques with respect to plaque phenotype, to determine the value of hybrid imaging for the detection of these biomarkers. METHODS: Human carotid plaque segments, obtained from endarterectomy, were incubated in [111In]In-DOTA-butylamino-NorBIRT ([111In]In-Danbirt), targeting Leukocyte Function-associated Antigen-1 (LFA-1) on leukocytes. SPECT/CT was performed to assess both inflammation, from [111In]In-Danbirt uptake, and calcification, from CT imaging. Plaque phenotype was classified using histology. RESULTS: On a total plaque level, comparable levels of calcification volume existed with different degrees of inflammation and vice versa. On a segment level, an inverse relationship between calcification volume and inflammation was evident in highly calcified segments, which are classified as fibrocalcific, stable plaque segments. In contrast, segments with little or no calcification presented with a moderate to high degree of inflammation, often coinciding with the more dangerous fibrous cap atheroma phenotype. CONCLUSION: Calcification imaging alone can only accurately identify highly calcified, stable, fibrocalcific plaques. To identify high-risk plaques with little or no calcification, hybrid imaging of calcification and inflammation could provide diagnostic benefit.


Subject(s)
Calcinosis, Carotid Artery Diseases, Atherosclerotic Plaque, Biomarkers, Calcinosis/diagnostic imaging, Calcinosis/pathology, Carotid Artery Diseases/diagnostic imaging, Humans, Indium Radioisotopes, Inflammation/complications, Inflammation/diagnostic imaging, Lymphocyte Function-Associated Antigen-1, Atherosclerotic Plaque/diagnostic imaging, Atherosclerotic Plaque/pathology, Single Photon Emission Computed Tomography Computed Tomography
2.
Phys Med Biol ; 66(6): 065011, 2021 03 04.
Article in English | MEDLINE | ID: mdl-33578400

ABSTRACT

Despite improvements in small animal PET instruments, many tracers cannot be imaged at sufficiently high resolutions due to positron range, while multi-tracer PET is hampered by the fact that all annihilation photons have equal energies. Here we realize multi-isotope and sub-mm resolution PET of isotopes with several mm positron range by utilizing prompt gamma photons that are commonly neglected. A PET-SPECT-CT scanner (VECTor/CT, MILabs, The Netherlands) equipped with a high-energy cluster-pinhole collimator was used to image 124I and a mix of 124I and 18F in phantoms and mice. In addition to positrons (mean range 3.4 mm) 124I emits large amounts of 603 keV prompt gammas that-aided by excellent energy discrimination of NaI-were selected to reconstruct 124I images that are unaffected by positron range. Photons detected in the 511 keV window were used to reconstruct 18F images. Images were reconstructed iteratively using an energy dependent matrix for each isotope. Correction of 18F images for contamination with 124I annihilation photons was performed by Monte Carlo based range modelling and scaling of the 124I prompt gamma image before subtracting it from the 18F image. Additionally, prompt gamma imaging was tested for 89Zr that emits very high-energy prompts (909 keV). In Derenzo resolution phantoms 0.75 mm rods were clearly discernable for 124I, 89Zr and for simultaneously acquired 124I and 18F imaging. Image quantification in phantoms with reservoirs filled with both 124I and 18F showed excellent separation of isotopes and high quantitative accuracy. Mouse imaging showed uptake of 124I in tiny thyroid parts and simultaneously injected 18F-NaF in bone structures. The ability to obtain PET images at sub-mm resolution both for isotopes with several mm positron range and for multi-isotope PET adds to many other unique capabilities of VECTor's clustered pinhole imaging, including simultaneous sub-mm PET-SPECT and theranostic high energy SPECT.
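A minimal sketch of the crosstalk-correction step described above, assuming a 124I image already reconstructed from the 603 keV prompt-gamma window and a scaling factor obtained elsewhere (in the paper, from Monte Carlo based positron-range modelling); the function name, the scale parameter and the optional blur callable are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def correct_511_window_image(img_511, img_124i_prompt, scale, positron_blur=None):
    """Remove the estimated 124I contribution from the 511 keV (18F) reconstruction.

    img_511         : image reconstructed from the 511 keV window (18F plus 124I annihilation photons)
    img_124i_prompt : 124I image reconstructed from the 603 keV prompt gammas (free of positron range)
    scale           : assumed factor converting prompt-gamma activity into the expected
                      124I annihilation-photon contribution in the 511 keV window
    positron_blur   : optional callable modelling the 124I positron range (mean 3.4 mm)
    """
    contamination = scale * img_124i_prompt
    if positron_blur is not None:
        contamination = positron_blur(contamination)
    return np.clip(img_511 - contamination, 0.0, None)  # clip to avoid negative activity
```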


Subject(s)
Electrons, Particle Accelerators, Positron-Emission Tomography/methods, Single-Photon Emission Computed Tomography/methods, X-Ray Computed Tomography/methods, Animals, Gamma Rays, Iodine Radioisotopes, Mice, Monte Carlo Method, Imaging Phantoms, Photons, Positron-Emission Tomography/instrumentation, Single-Photon Emission Computed Tomography/instrumentation, X-Ray Computed Tomography/instrumentation
3.
Phys Med Biol ; 52(9): 2567-81, 2007 May 07.
Article in English | MEDLINE | ID: mdl-17440253

ABSTRACT

State-of-the-art multi-pinhole SPECT devices allow for sub-mm resolution imaging of radio-molecule distributions in small laboratory animals. The optimization of multi-pinhole and detector geometries using simulations based on ray-tracing or Monte Carlo algorithms is time-consuming, particularly because many system parameters need to be varied. As an efficient alternative we develop a continuous analytical model of a pinhole SPECT system with a stationary detector set-up, which we apply to focused imaging of a mouse. The model assumes that the multi-pinhole collimator and the detector both have the shape of a spherical layer, and uses analytical expressions for effective pinhole diameters, sensitivity and spatial resolution. For fixed fields-of-view, a pinhole-diameter adapting feedback loop allows for the comparison of the system resolution of different systems at equal system sensitivity, and vice versa. The model predicts that (i) for optimal resolution or sensitivity the collimator layer with pinholes should be placed as closely as possible around the animal given a fixed detector layer, (ii) with high-resolution detectors a resolution improvement up to 31% can be achieved compared to optimized systems, (iii) high-resolution detectors can be placed close to the collimator without significant resolution losses, (iv) interestingly, systems with a physical pinhole diameter of 0 mm can have an excellent resolution when high-resolution detectors are used.
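The equal-sensitivity comparison described above can be illustrated with textbook pinhole expressions; the formulas below are the plain geometric ones (the paper's model uses effective pinhole diameters that also account for edge penetration), and every number in the example is an arbitrary assumption:

```python
import numpy as np
from scipy.optimize import brentq

def pinhole_sensitivity(d, h):
    """Textbook on-axis pinhole sensitivity for diameter d at source-to-pinhole distance h."""
    return d**2 / (16.0 * h**2)

def system_resolution(d, h, l, r_intrinsic):
    """Geometric pinhole resolution combined with detector blur divided by the magnification l/h."""
    return np.hypot(d * (h + l) / l, r_intrinsic * h / l)

def diameter_for_sensitivity(target, n_pinholes, h):
    """Feedback step: find the pinhole diameter giving the requested total system sensitivity."""
    return brentq(lambda d: n_pinholes * pinhole_sensitivity(d, h) - target, 1e-6, 50.0)

# Compare two hypothetical geometries at equal system sensitivity (all values in mm, made up).
for h, l, r_int in [(16.0, 60.0, 3.5), (14.0, 62.0, 0.2)]:
    d = diameter_for_sensitivity(target=2e-4, n_pinholes=75, h=h)
    print(f"h={h}: d={d:.3f} mm, system resolution={system_resolution(d, h, l, r_int):.3f} mm")
```

With these made-up numbers the high-resolution detector variant reaches a much better system resolution at the same sensitivity, which is the kind of comparison the adaptive-diameter feedback loop enables.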


Subject(s)
Computer Simulation, Theoretical Models, Single-Photon Emission Computed Tomography/methods, Animals, Gamma Cameras, Mice, Monte Carlo Method, Imaging Phantoms
4.
Phys Med Biol ; 51(4): 875-89, 2006 Feb 21.
Article in English | MEDLINE | ID: mdl-16467584

ABSTRACT

Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered by limited intrinsic accuracy. Recently, however, floating-point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, at almost preserved reconstructed image accuracy, speed-ups by a factor of 40 to 222 can be achieved compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.


Subject(s)
Algorithms, Computer Graphics, Computers, Radiographic Image Enhancement/instrumentation, Computer-Assisted Radiographic Image Interpretation/instrumentation, Computer-Assisted Signal Processing/instrumentation, X-Ray Computed Tomography/instrumentation, Artificial Intelligence, Cluster Analysis, Computer Systems, Computer-Aided Design, Feasibility Studies, Imaging Phantoms, Radiographic Image Enhancement/methods, Computer-Assisted Radiographic Image Interpretation/methods, Reproducibility of Results, Sensitivity and Specificity, X-Ray Computed Tomography/methods
5.
Phys Med Biol ; 61(11): 4300-15, 2016 06 07.
Article in English | MEDLINE | ID: mdl-27206135

ABSTRACT

Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). The speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, convergence in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similarly the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover, our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
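The voxel-wise regulation idea can be caricatured as follows. This is not the published SR-OS-EM update (the exact similarity criterion and the sequential subset ordering are simplified away); it only sketches how per-subset update factors could be compared per voxel, with a hypothetical agreement threshold tol:

```python
import numpy as np

def subset_regulated_update(y, A, subsets, x, tol=0.1):
    """One sketch iteration: compute the multiplicative update factor each subset would
    apply to every voxel, then use a subset-style (fast) update where the factors agree
    and the averaged (ML-EM-like, safe) update where they disagree.
    A is a dense (bins x voxels) matrix; subsets is a list of index arrays over bins."""
    factors = []
    for s in subsets:
        forward = A[s] @ x + 1e-12                      # expected counts for this subset
        sensitivity = A[s].sum(axis=0) + 1e-12          # per-voxel sensitivity of this subset
        factors.append((A[s].T @ (y[s] / forward)) / sensitivity)
    factors = np.array(factors)
    spread = factors.std(axis=0) / (factors.mean(axis=0) + 1e-12)   # per-voxel disagreement
    fast, safe = factors[0], factors.mean(axis=0)
    return x * np.where(spread < tol, fast, safe)
```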


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Single-Photon Emission Computed Tomography/methods, Artifacts, Humans, Imaging Phantoms
6.
Biomater Sci ; 4(8): 1202-11, 2016 Aug 19.
Article in English | MEDLINE | ID: mdl-27286085

ABSTRACT

Understanding how nanoparticle properties such as size, morphology and rigidity influence their circulation time and biodistribution is essential for the development of nanomedicine therapies. Herein we assess the influence of morphology on cellular internalization, in vivo biodistribution and circulation time of nanocarriers using polystyrene-b-poly(ethylene oxide) micelles of spherical or elongated morphology. The glassy nature of polystyrene guarantees the morphological stability of the carriers in vivo and by encapsulating Indium-111 in their core, an assessment of the longitudinal in vivo biodistribution of the particles in healthy mice is performed with single photon emission computed tomography imaging. Our results show prolonged blood circulation, longer than 24 hours, for all micelle morphologies studied. Dynamics of micelle accumulation in the liver and other organs of the reticuloendothelial system show a size-dependent nature and late stage liver clearance is observed for the elongated morphology. Apparent contradictions between recent similar studies can be resolved by considering the effects of flexibility and degradation of the elongated micelles on their circulation time and biodistribution.


Subject(s)
Micelles, Polyethylene Glycols/metabolism, Polystyrenes/metabolism, Single Photon Emission Computed Tomography Computed Tomography, Animals, Blood Circulation, Drug Carriers/metabolism, Drug Stability, HeLa Cells, Humans, Indium Radioisotopes, Liver/metabolism, Mice, Inbred C57BL Mice, Nanomedicine, Nanoparticles/metabolism, Surface Properties, Tissue Distribution
7.
Phys Med Biol ; 50(6): 1265-72, 2005 Mar 21.
Article in English | MEDLINE | ID: mdl-15798321

ABSTRACT

Statistical reconstruction methods offer possibilities of improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical applications. To reduce reconstruction times we have parallelized a statistical reconstruction algorithm for cone-beam x-ray CT, the ordered subset convex algorithm (OSC), and evaluated it on a shared memory computer. Two different parallelization strategies were developed: one that employs parallelism by computing the work for all projections within a subset in parallel, and one that divides the total volume into parts and processes the work for each sub-volume in parallel. Both methods are used to reconstruct a three-dimensional mathematical phantom on two different grid densities. The reconstructed images are binary identical to the result of the serial (non-parallelized) algorithm. The speed-up factor equals approximately 30 when using 32 to 40 processors, and scales almost linearly with the number of CPUs for both methods. The huge reduction in computation time allows us to apply statistical reconstruction to clinically relevant studies for the first time.
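The projection-parallel strategy can be sketched as follows: the rays of one subset are divided over workers, each worker backprojects its share, and the partial results are summed. The sketch uses Python processes and a generic per-ray weight; the paper's implementation runs the actual OSC factors on shared-memory threads, so this is only the shape of the idea:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _chunk_backprojection(args):
    """One worker's share: backproject a per-ray weight for its chunk of rays.
    In OSC the weight would come from the transmission model; here it is generic."""
    A_chunk, w_chunk = args
    return A_chunk.T @ w_chunk

def projection_parallel_backprojection(A, weights, n_workers=4):
    """Split the rays of one subset over workers and sum the partial backprojections."""
    chunks = np.array_split(np.arange(A.shape[0]), n_workers)
    tasks = [(A[c], weights[c]) for c in chunks]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(_chunk_backprojection, tasks))
```

The volume-parallel strategy would instead give each worker a fixed block of voxels (columns of A) and have it process all rays of the subset for that block.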


Subject(s)
Algorithms, Computing Methodologies, Three-Dimensional Imaging/methods, Information Storage and Retrieval/methods, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Artificial Intelligence, Computer Simulation, Biological Models, Statistical Models, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
8.
Phys Med Biol ; 50(4): 613-23, 2005 Feb 21.
Article in English | MEDLINE | ID: mdl-15773623

ABSTRACT

Statistical methods for image reconstruction such as maximum likelihood expectation maximization (ML-EM) are more robust and flexible than analytical inversion methods and allow for accurate modelling of the photon transport and noise. Statistical reconstruction is prohibitively slow when applied to clinical x-ray cone-beam CT due to the large data sets and the high number of iterations required for reconstructing high resolution images. One way to reduce the reconstruction time is to use ordered subsets of projections during the iterations, which has been successfully applied to fan-beam x-ray CT. In this paper, we quantitatively analyse the use of ordered subsets in concert with the convex algorithm for cone-beam x-ray CT reconstruction, for the case of circular acquisition orbits. We focus on the reconstructed image accuracy of a 3D head phantom. Acceleration factors larger than 300 were obtained with errors smaller than 1%, while preserving the signal-to-noise ratio. Pushing the acceleration factor towards 600 by using an increasing number of subsets increases the reconstruction error up to 5% and significantly increases noise. The results indicate that the use of ordered subsets can be extremely useful for cone-beam x-ray CT.


Subject(s)
Algorithms, Brain/diagnostic imaging, Three-Dimensional Imaging/methods, Computer-Assisted Radiographic Image Interpretation/methods, Spiral Computed Tomography/methods, Artificial Intelligence, Humans, Automated Pattern Recognition/methods, Imaging Phantoms, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity, Spiral Computed Tomography/instrumentation
9.
Nucl Med Biol ; 42(5): 465-469, 2015 May.
Article in English | MEDLINE | ID: mdl-25662844

ABSTRACT

INTRODUCTION: (188)Rhenium-HEDP is an effective bone-targeting therapeutic radiopharmaceutical for the treatment of osteoblastic bone metastases. It is known that the presence of carrier (non-radioactive rhenium as ammonium perrhenate) in the reaction mixture during labeling is a prerequisite for adequate bone affinity, but little is known about the optimal carrier concentration. METHODS: We investigated the influence of carrier concentration in the formulation on the radiochemical purity, in vitro hydroxyapatite affinity and in vivo bone accumulation of (188)Rhenium-HEDP in mice. RESULTS: The carrier concentration influenced hydroxyapatite binding in vitro as well as bone accumulation in vivo. Variation in hydroxyapatite binding with various carrier concentrations seemed to be mainly driven by variation in radiochemical purity. The in vivo bone accumulation appeared to be more complex: satisfactory radiochemical purity and hydroxyapatite affinity did not necessarily predict acceptable biodistribution of (188)Rhenium-HEDP. CONCLUSIONS: For development of new bisphosphonate-based radiopharmaceuticals for clinical use, human administration should not be performed without prior animal biodistribution experiments. Furthermore, our clinical formulation of (188)Rhenium-HEDP, containing 10 µmol carrier, showed excellent bone accumulation that was comparable to other bisphosphonate-based radiopharmaceuticals, with no apparent uptake in other organs. ADVANCES IN KNOWLEDGE: Radiochemical purity and in vitro hydroxyapatite binding are not necessarily predictive of bone accumulation of (188)Rhenium-HEDP in vivo. IMPLICATIONS FOR PATIENT CARE: The formulation of (188)Rhenium-HEDP as developed by us for clinical use exhibits excellent bone uptake, and variation in carrier concentration during preparation of this radiopharmaceutical should be avoided.


Subject(s)
Durapatite/chemistry, Etidronic Acid/chemistry, Radiochemistry/methods, Radioisotopes/chemistry, Radiopharmaceuticals/chemistry, Rhenium/chemistry, Animals, Bones/metabolism, Durapatite/pharmacokinetics, Durapatite/therapeutic use, Etidronic Acid/pharmacokinetics, Etidronic Acid/therapeutic use, Male, Mice, Inbred C57BL Mice, Radiopharmaceuticals/pharmacokinetics, Radiopharmaceuticals/therapeutic use, Tissue Distribution
10.
J Nucl Med ; 39(11): 1996-2003, 1998 Nov.
Article in English | MEDLINE | ID: mdl-9829597

ABSTRACT

UNLABELLED: One type of SPECT system often used for simultaneous emission-transmission tomography is equipped with parallel-hole collimators, moving line sources (MLS) and electronic windows that move in synchrony with the sources. Although downscatter from the emission distribution is reduced by the use of the electronic window, this still can represent a sizable fraction of the transmitted counts. These systems have relatively poor spatial resolution and use costly transmission sources. METHODS: Using a two-head SPECT system, with heads at right angles, two 153Gd line sources (5800 MBq each) were replaced by two 153Gd point sources of only 750 MBq each and positioned to move along the focal lines of two half-fanbeam collimators. A suitable acquisition protocol for a moving point source (MPS) system was selected by considering the results of a simulation study. With this protocol, physical phantom experiments were conducted. RESULTS: Simulations showed that with two half-fanbeam collimators, a gantry rotation of 90 degrees, such as is used for 180-degree acquisition with parallel-beam collimators in cardiac imaging, was insufficient. A gantry rotation of 180 degrees resulted in attenuation maps where only an area to the posterior of a 400-mm wide thorax phantom was affected by truncation. The MPS system had a 14.7 times higher sensitivity for transmission counts than the MLS system. Despite the smaller sources in the MPS system, the number of acquired transmission counts was a factor of 1.91 higher than with the MLS system, resulting in reduced noise. The relative downscatter contribution from 99mTc (140 keV) in the 153Gd moving electronic window (100 keV) was reduced by a factor of 1.81. Transmission images of a rod phantom with segments containing acrylic rods of different diameters showed an improvement of resolution in favor of the MPS system from about 11 mm to about 6 mm (five instead of two segments of rods were clearly visible). In addition, the noise level in the MPS thorax transmission images was significantly lower. CONCLUSION: The MPS system has important advantages when compared with the MLS system. The use of low-activity point sources is economically beneficial when compared with line sources and reduces radiation exposure to staff and patients.


Subject(s)
Single-Photon Emission Computed Tomography/instrumentation, Equipment Design, Humans, Imaging Phantoms, Radiation Scattering, Single-Photon Emission Computed Tomography/methods
11.
Med Phys ; 26(11): 2311-22, 1999 Nov.
Article in English | MEDLINE | ID: mdl-10587212

ABSTRACT

The point spread function (PSF) of a gamma camera describes the photon count density distribution at the detector surface when a point source is imaged. Knowledge of the PSF is important for computer simulation and accurate image reconstruction of single photon emission computed tomography (SPECT) images. To reduce the number of measurements required for PSF characterization and the amount of computer memory to store PSF tables, and to enable generalization of the PSF to different collimator-to-source distances, the PSF may be modeled as the two-dimensional (2D) convolution of the depth-dependent component which is free of detector blurring (PSF(ideal)) and the distance-dependent detector response. Owing to limitations imposed by the radioactive strength of point sources, extended sources have to be used for measurements. Therefore, if PSF(ideal) is estimated from measured responses, corrections have to be made both for the detector blurring and for the extent of the source. In this paper, an approach based on maximum likelihood expectation-maximization (ML-EM) is used to estimate PSF(ideal). In addition, a practical measurement procedure which avoids problems associated with commonly used line-source measurements is proposed. To decrease noise and to prevent nonphysical solutions, shape constraints are applied during the estimation of PSF(ideal). The estimates are generalized to depths other than those which have been measured and are incorporated in a SPECT simulator. The method is validated for Tc-99m and Tl-201 by means of measurements on physical phantoms. The corrected responses have the desired shapes and simulated responses closely resemble measured responses. The proposed methodology may, consequently, serve as a basis for accurate three-dimensional (3D) SPECT reconstruction.
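The core estimation step, ML-EM deconvolution of a measured response with a known blurring kernel, is essentially Richardson-Lucy and can be sketched as below; the paper's shape constraints and the correction for the extent of the source are not reproduced, so the non-negativity clip is only a crude stand-in:

```python
import numpy as np
from scipy.signal import fftconvolve

def mlem_deconvolve(measured, kernel, n_iter=100):
    """Estimate the blur-free response (PSF(ideal) in the paper's terms) from a measured
    2D response, given a kernel modelling detector blurring and source extent."""
    kernel = kernel / kernel.sum()
    flipped = kernel[::-1, ::-1]
    estimate = np.full(measured.shape, measured.mean(), dtype=float)
    for _ in range(n_iter):
        forward = fftconvolve(estimate, kernel, mode="same") + 1e-12
        estimate = estimate * fftconvolve(measured / forward, flipped, mode="same")
        estimate = np.clip(estimate, 0.0, None)   # non-negativity instead of the paper's shape constraints
    return estimate
```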


Subject(s)
Image Enhancement/methods, Likelihood Functions, Radiation Scattering, Single-Photon Emission Computed Tomography/methods, Algorithms, Computer Simulation, Theoretical Models, Imaging Phantoms, Technetium/chemistry, Thallium/chemistry
12.
IEEE Trans Med Imaging ; 23(5): 584-90, 2004 May.
Article in English | MEDLINE | ID: mdl-15147011

ABSTRACT

Monte Carlo (MC) methods can accurately simulate scatter in X-ray imaging. However, when low noise scatter projections have to be simulated these MC simulations tend to be very time consuming. Rapid computation of scatter estimates is essential for several applications. The aim of the work presented in this paper is to speed up the estimation of noise-free scatter projections while maintaining their accuracy. Since X-ray scatter projections are often rather smooth, an approach is chosen whereby a short MC simulation is combined with a data fitting program that is robust to projection truncation and noise. This method allows us to estimate the smooth scatter projection rapidly. The speed-up and accuracy achieved by using the fitting algorithm were validated for the projection simulation of a small animal X-ray CT system. The acceleration that can be obtained over standard MC simulations is typically two orders of magnitude, depending on the accuracy required. The proposed approach may be useful for rapid simulation of patient and animal studies and for correction of the image-degrading effects of scatter in tomography.
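The idea of replacing a long MC run with a short one plus a smooth fit could look like the following; the paper's actual fitting model and its robustness to truncation and noise are not specified here, so a plain masked least-squares polynomial surface is used purely as an illustration:

```python
import numpy as np

def fit_smooth_scatter(noisy_proj, mask=None, order=3):
    """Fit a low-order 2D polynomial to a short, noisy MC scatter projection to obtain a
    nearly noise-free estimate. mask can exclude truncated or empty detector pixels."""
    ny, nx = noisy_proj.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx.ravel() / nx, yy.ravel() / ny
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    design = np.stack(terms, axis=1)
    values = noisy_proj.ravel()
    keep = np.ones(values.shape, dtype=bool) if mask is None else mask.ravel()
    coeffs, *_ = np.linalg.lstsq(design[keep], values[keep], rcond=None)
    return (design @ coeffs).reshape(ny, nx)
```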


Subject(s)
Algorithms, Head/diagnostic imaging, Biological Models, Radiographic Image Enhancement/trends, Computer-Assisted Radiographic Image Interpretation/methods, Radiation Scattering, Computer-Assisted Signal Processing, X-Ray Computed Tomography/methods, Animals, Statistical Models, Monte Carlo Method, Imaging Phantoms, Rats, Reproducibility of Results, Sensitivity and Specificity, X-Ray Computed Tomography/instrumentation
13.
IEEE Trans Med Imaging ; 14(2): 271-82, 1995.
Article in English | MEDLINE | ID: mdl-18215831

ABSTRACT

A fast simulator of SPECT projection data taking into account attenuation, distance-dependent detector response and scatter has been developed, based on an analytical point spread function model. The parameters of the scatter response are obtained from a single line source measurement with a triangular phantom. The simulator is able to include effects of object curvature on the scatter response to a high accuracy. The simulator has been evaluated for homogeneous media by measurements of (99m)Tc point sources placed at different locations in a water-filled cylinder at energy windows of 15% and 20%. The asymmetrical shapes of measured projections of point sources are in excellent agreement with simulations for both energy windows. Scatter-to-primary ratio (SPR) calculations of point sources at different positions in a cylindrical phantom differ by no more than a few percent from measurements. The simulator uses just a few megabytes of memory for storing the tables representing the forward model; furthermore, simulation of 60 SPECT projections from a three-dimensional digital brain phantom with 6-mm cubic voxels takes only ten minutes on a standard workstation. Therefore, the simulator could serve as a projector in iterative true 3-D SPECT reconstruction.
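A heavily reduced sketch of such a projector, covering only attenuation and a distance-dependent Gaussian detector response for a single view of a 2D slice; the object-shape-dependent scatter kernel, which is the paper's main contribution, is omitted, and sigma0/sigma_slope merely stand in for fitted collimator-response parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def project_one_view(activity, mu, pixel_size, sigma0, sigma_slope):
    """Forward project a 2D activity slice onto a detector assumed at row 0
    (depth along axis 0, detector bins along axis 1), with attenuation and a
    depth-dependent Gaussian detector response."""
    attenuation = np.exp(-np.cumsum(mu, axis=0) * pixel_size)   # path attenuation towards row 0
    projection = np.zeros(activity.shape[1])
    for z in range(activity.shape[0]):
        sigma_mm = sigma0 + sigma_slope * z * pixel_size        # response widens with distance
        blurred = gaussian_filter1d(activity[z] * attenuation[z],
                                    sigma=max(sigma_mm / pixel_size, 1e-3))
        projection += blurred
    return projection
```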

14.
IEEE Trans Med Imaging ; 17(6): 1101-5, 1998 Dec.
Article in English | MEDLINE | ID: mdl-10048870

ABSTRACT

Iterative maximum likelihood (ML) transmission computed tomography algorithms have distinct advantages over Fourier-based reconstruction, but unfortunately require increased computation time. The convex algorithm [1] is a relatively fast iterative ML algorithm but it is nevertheless too slow for many applications. Therefore, an acceleration of this algorithm by using ordered subsets of projections is proposed [ordered subsets convex algorithm (OSC)]. OSC applies the convex algorithm sequentially to subsets of projections. OSC was compared with the convex algorithm using simulated and physical thorax phantom data. Reconstructions were performed for OSC using eight and 16 subsets (eight and four projections/subset, respectively). Global errors, image noise, contrast recovery, and likelihood increase were calculated. Results show that OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and it causes only a slight increase of noise and global errors in the reconstructions. Images and image profiles of the reconstructions were in good agreement. In conclusion, OSC and the convex algorithm result in similar image quality but OSC is more than an order of magnitude faster.
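The structure of OSC, the convex transmission update applied sequentially to each subset of projections, can be sketched as follows; the update shown is the standard convex-algorithm form, and implementation details of the paper (initialisation, subset ordering, corrections) are glossed over:

```python
import numpy as np

def ordered_subsets_convex(y, blank, A, subsets, mu0, n_iter=2):
    """OSC sketch for transmission CT.

    y       : measured transmission counts per ray
    blank   : blank-scan (unattenuated) counts per ray
    A       : intersection-length matrix (rays x voxels), dense here for brevity
    subsets : list of index arrays over rays
    mu0     : initial attenuation map (flattened)
    """
    mu = mu0.copy()
    for _ in range(n_iter):
        for s in subsets:                                   # one convex update per subset
            line = A[s] @ mu
            expected = blank[s] * np.exp(-line)             # expected counts along each ray
            numerator = A[s].T @ (expected - y[s])
            denominator = A[s].T @ (line * expected) + 1e-12
            mu = np.clip(mu + mu * numerator / denominator, 0.0, None)
    return mu
```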


Subject(s)
Algorithms, X-Ray Computed Tomography/methods, Humans, Imaging Phantoms/statistics & numerical data, Thoracic Radiography/methods, Thoracic Radiography/statistics & numerical data, X-Ray Computed Tomography/statistics & numerical data
15.
IEEE Trans Med Imaging ; 15(4): 491-9, 1996.
Article in English | MEDLINE | ID: mdl-18215930

ABSTRACT

The quality and quantitative accuracy of iteratively reconstructed SPECT images improves when better point spread function (PSF) models of the gamma camera are used during reconstruction. Here, inclusion in the PSF model of photon crosstalk between different slices caused by limited gamma camera resolution and scatter is examined. A three-dimensional (3-D) projector back-projector (proback) has been developed which models both the distance dependent detector point spread function and the object shape-dependent scatter point spread function of single photon emission computed tomography (SPECT). A table occupying only a few megabytes of memory is sufficient to represent this scatter model. The contents of this table are obtained by evaluating an analytical expression for object shape-dependent scatter. The proposed approach avoids the huge memory requirements of storing the full transition matrix needed for 3-D reconstruction including object shape-dependent scatter. In addition, the method avoids the need for lengthy Monte Carlo simulations to generate such a matrix. In order to assess the quantitative accuracy of the method, reconstructions of a water filled cylinder containing regions of different activity levels and of simulated 3-D brain projection data have been evaluated for technetium-99m. It is shown that fully 3-D reconstruction including complete detector response and object shape-dependent scatter modeling clearly outperforms simpler methods that lack a complete detector response and/or a complete scatter response model. Fully 3-D scatter correction yields the best quantitation of volumes of interest and the best contrast-to-noise curves.

16.
Phys Med Biol ; 46(3): 621-35, 2001 Mar.
Article in English | MEDLINE | ID: mdl-11277213

ABSTRACT

A rotation-based Monte Carlo (MC) simulation method (RMC) has been developed, designed for rapid calculation of downscatter through non-uniform media in SPECT. A possible application is downscatter correction in dual isotope SPECT. With RMC, only a fraction of all projections of a SPECT study have to be MC simulated in the standard manner. The other projections can be estimated rapidly using the results of these standard MC calculations. For efficiency, approximations have to be made in RMC with regard to the final scatter angle of the detected photons. Further speed-up is obtained by combining RMC with convolution-based forced detection (CFD) instead of forced detection (FD), which is a more common variance reduction technique for MC. The RMC method was compared with standard MC for 99mTc downscatter in a 201Tl window (72 keV +/- 10%) using a digital thorax phantom. The resulting scatter projections are in good agreement (maximum bias a few per cent of the largest value in the projection), but RMC with CFD is about three orders of magnitude faster than standard MC with FD and up to 25 times faster than standard MC with CFD. Using RMC combined with CFD, the generation of 64 almost noise-free downscatter projections (64 x 64) takes only a couple of minutes on a 500 MHz Pentium processor. Therefore, rotation-based Monte Carlo could serve as a practical tool for downscatter correction schemes in dual isotope SPECT.


Subject(s)
Imaging Phantoms, Single-Photon Emission Computed Tomography, Humans, Computer-Assisted Image Processing, Monte Carlo Method, Thoracic Radiography, Reproducibility of Results, Radiation Scattering, Technetium, Thallium Radioisotopes
17.
Phys Med Biol ; 42(8): 1619-32, 1997 Aug.
Article in English | MEDLINE | ID: mdl-9279910

ABSTRACT

Effects of different scatter compensation methods incorporated in fully 3D iterative reconstruction are investigated. The methods are: (i) the inclusion of an 'ideal scatter estimate' (ISE); (ii) like (i) but with a noiseless scatter estimate (ISE-NF); (iii) incorporation of scatter in the point spread function during iterative reconstruction ('ideal scatter model', ISM); (iv) no scatter compensation (NSC); (v) ideal scatter rejection (ISR), as can be approximated by using a camera with a perfect energy resolution. The iterative method used was an ordered subset expectation maximization (OS-EM) algorithm. A cylinder containing small cold spheres was used to calculate contrast-to-noise curves. For a brain study, global errors between reconstruction and 'true' distributions were calculated. Results show that ISR is superior to all other methods. In all cases considered, ISM is superior to ISE and performs approximately as well as (brain study) or better than (cylinder data) ISE-NF. Both ISM and ISE improve contrast-to-noise curves and reduce global errors, compared with NSC. In the case of ISE, blurring of the scatter estimate with a Gaussian kernel results in slightly reduced errors in brain studies, especially at low count levels. The optimal Gaussian kernel size is strongly dependent on the noise level.
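Method (i), adding a scatter estimate to the forward projection inside OS-EM, can be sketched as below; method (iii) would instead fold the scatter response into the system matrix/PSF, and the scatter estimate itself has to come from elsewhere (ideal in the paper, Monte Carlo or model-based in practice):

```python
import numpy as np

def osem_with_scatter_estimate(y, A, scatter, subsets, x0, n_iter=5):
    """OS-EM with an additive scatter estimate in the forward model (the 'ISE' style).
    A is a dense (bins x voxels) matrix; scatter is the estimated scatter per bin."""
    x = x0.copy()
    for _ in range(n_iter):
        for s in subsets:
            forward = A[s] @ x + scatter[s] + 1e-12        # expected counts = primaries + scatter
            backprojected_ratio = A[s].T @ (y[s] / forward)
            sensitivity = A[s].sum(axis=0) + 1e-12
            x = x * backprojected_ratio / sensitivity
    return x
```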


Subject(s)
Brain/diagnostic imaging, Imaging Phantoms, Single-Photon Emission Computed Tomography/methods, Algorithms, Humans, Reproducibility of Results, Radiation Scattering
18.
Phys Med Biol ; 49(18): 4321-33, 2004 Sep 21.
Article in English | MEDLINE | ID: mdl-15509068

ABSTRACT

We describe a newly developed, accelerated Monte Carlo simulator of a small animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of simulated projections, compared with the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of per cent in specific areas of the body and therefore scatter can significantly affect quantitative accuracy in small animal CT imaging.
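A minimal version of estimating the source spectrum from transmission through aluminium slabs, posed as a non-negative least-squares problem over a chosen energy grid; the paper's procedure (detector response, energy sampling, measurement handling) is certainly richer than this sketch:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_spectrum(thicknesses, transmission, mu_al, energies):
    """Estimate relative spectral weights w_j from transmission measurements.

    Model: T(t_k) = sum_j w_j * exp(-mu_al[j] * t_k), with w_j >= 0, where t_k are the
    slab thicknesses and mu_al[j] the aluminium attenuation coefficients at energies[j].
    """
    system = np.exp(-np.outer(thicknesses, mu_al))
    weights, _ = nnls(system, transmission)
    weights /= weights.sum() + 1e-12            # normalise to a unit-area spectrum
    return dict(zip(energies, weights))
```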


Subject(s)
Algorithms, Equipment Failure Analysis/methods, Biological Models, Monte Carlo Method, Radiographic Image Enhancement/methods, Computer-Assisted Radiographic Image Interpretation/methods, Spiral Computed Tomography/methods, Animals, Artifacts, Computer Simulation, Miniaturization/instrumentation, Imaging Phantoms, Abdominal Radiography/methods, Rats, Reproducibility of Results, Sensitivity and Specificity, Spiral Computed Tomography/instrumentation, Spiral Computed Tomography/veterinary
19.
Phys Med Biol ; 43(6): 1713-30, 1998 Jun.
Article in English | MEDLINE | ID: mdl-9651035

ABSTRACT

Iterative reconstruction from single photon emission computed tomography (SPECT) data requires regularization to avoid noise amplification and edge artefacts in the reconstructed image. This is often accomplished by stopping the iteration process at a relatively low number of iterations or by post-filtering the reconstructed image. The aim of this paper is to develop a method to automatically select an optimal combination of stopping iteration number and filters for a particular imaging situation. To this end different error measures between the distribution of a phantom and a corresponding filtered SPECT image are minimized for different iteration numbers. As a study example, simulated data representing a brain study are used. For post-reconstruction filtering, the performance of 3D linear diffusion (Gaussian filtering) and edge preserving 3D nonlinear diffusion (Catté scheme) is investigated. For reconstruction methods which model the image formation process accurately, error measures between the phantom and the filtered reconstruction are significantly reduced by performing a high number of iterations followed by optimal filtering compared with stopping the iterative process early. Furthermore, this error reduction can be obtained over a wide range of iteration numbers. Only a negligibly small additional reduction of the errors is obtained by including spatial variance in the filter kernel. Compared with Gaussian filtering, Catté diffusion can further reduce the error in some cases. For the examples considered, using accurate image formation models during iterative reconstruction is far more important than the choice of the filter.
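The two post-filters compared above are Gaussian smoothing (linear diffusion) and an edge-preserving nonlinear diffusion in the spirit of the Catté scheme, i.e. Perona-Malik diffusion whose conductivity is computed from a Gaussian-smoothed gradient. The sketch below uses illustrative parameter values, not those optimised in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def catte_diffusion(img, n_steps=20, dt=0.15, kappa=0.05, sigma=1.0):
    """Edge-preserving nonlinear diffusion (Catté-style): the conductivity is small
    across strong, presmoothed gradients so edges are preserved while flat regions
    are smoothed. Plain Gaussian filtering would simply be gaussian_filter(img, sigma)."""
    u = img.astype(float).copy()
    for _ in range(n_steps):
        gy, gx = np.gradient(gaussian_filter(u, sigma))        # presmoothed gradient
        c = 1.0 / (1.0 + (gx**2 + gy**2) / kappa**2)           # conductivity
        uy, ux = np.gradient(u)
        div = np.gradient(c * ux, axis=1) + np.gradient(c * uy, axis=0)
        u += dt * div                                          # explicit diffusion step
    return u
```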


Subject(s)
Computer-Assisted Image Processing/statistics & numerical data, Single-Photon Emission Computed Tomography/statistics & numerical data, Biophysical Phenomena, Biophysics, Brain/diagnostic imaging, Humans, Likelihood Functions, Linear Models, Nonlinear Dynamics, Imaging Phantoms
20.
Phys Med Biol ; 44(8): N183-92, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10473218

ABSTRACT

Accurate simulation of scatter in projection data of single photon emission computed tomography (SPECT) is computationally extremely demanding for activity distributions in non-uniformly dense media. This paper suggests how the computation time and memory requirements can be significantly reduced. First the scatter projection of a uniformly dense object (P_SDSE) is calculated using a previously developed accurate and fast method which includes all orders of scatter (slab-derived scatter estimation), and then P_SDSE is transformed towards the desired projection P, which is based on the non-uniform object. The transform of P_SDSE uses two first-order Compton scatter Monte Carlo (MC) simulated projections: one based on the uniform object (P_u) and the other on the object with non-uniformities (P_nu). P is estimated by P = P_SDSE P_nu/P_u. A tremendous decrease in the noise of P is achieved by tracking photon paths for P_nu identical to those tracked for the calculation of P_u, and by using analytical rather than stochastic modelling of the collimator. The method was validated by comparing the results with standard MC-simulated scatter projections of 99mTc and 201Tl point sources in a digital thorax phantom. After correction, excellent agreement was obtained between the transformed and the standard MC-simulated projections. The total computation time required to calculate an accurate scatter projection of an extended distribution in a thorax phantom on a PC is only a few tens of seconds per projection, which makes the method attractive for accurate scatter correction in clinical SPECT. Furthermore, the method removes the need for the excessive computer memory involved in previously proposed 3D model-based scatter correction methods.
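The transform itself is a per-pixel operation; a direct sketch in the abstract's notation, with a small epsilon guarding against division by zero in empty regions:

```python
import numpy as np

def transform_scatter_projection(p_sdse, p_nu_mc, p_u_mc, eps=1e-12):
    """Estimate the scatter projection of the non-uniform object as
    P = P_SDSE * P_nu / P_u, where P_SDSE is the all-orders slab-derived scatter
    estimate for the uniform object and P_nu, P_u are first-order Compton MC
    projections of the non-uniform and uniform objects (ideally computed from
    identical photon paths so that their noise largely cancels)."""
    return p_sdse * p_nu_mc / (p_u_mc + eps)
```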


Subject(s)
Computer-Assisted Image Processing/methods, Theoretical Models, Monte Carlo Method, Radiation Scattering, Single-Photon Emission Computed Tomography/methods, Computer Simulation, Photons