Results 1 - 20 of 39
1.
Eur J Nucl Med Mol Imaging ; 36(12): 1994-2001, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19526237

ABSTRACT

PURPOSE: The aim of this study is to optimize different parameters in the time-of-flight (TOF) reconstruction for the Philips GEMINI TF. The use of TOF in iterative reconstruction introduces additional variables to be optimized compared to conventional PET reconstruction. The different parameters studied are the TOF kernel width, the kernel truncation (used to reduce reconstruction time) and the scatter correction method. METHODS: These parameters are optimized using measured phantom studies. All phantom studies were acquired with a very high number of counts to limit the effects of noise. A high number of iterations (33 subsets and 3 iterations) was used to reach convergence. The figures of merit are the uniformity in the background, the cold spot recovery and the hot spot contrast. As reference results we used the non-TOF reconstruction of the same data sets. RESULTS: It is shown that contrast recovery loss can only be avoided if the kernel is extended to more than 3 standard deviations. To obtain uniform reconstructions the recommended scatter correction is TOF single scatter simulation (SSS). This also leads to improved cold spot recovery and hot spot contrast. While the daily measurements of the system show a timing resolution in the range of 590-600 ps, the optimal reconstructions are obtained with a TOF kernel full-width at half-maximum (FWHM) of 650-700 ps. The optimal kernel width seems to be less critical for the recovered contrast but has an important effect on the background uniformity. Using smaller or wider kernels results in a less uniform background and reduced hot and cold contrast recovery. CONCLUSION: The different parameters studied have a large effect on the quantitative accuracy of the reconstructed images. The optimal settings from this study can be used as a guideline to make an objective comparison of the gains obtained with TOF PET versus non-TOF PET reconstruction.


Subject(s)
Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Humans; Image Processing, Computer-Assisted/instrumentation; Positron-Emission Tomography/instrumentation; Scattering, Radiation; Time Factors
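
As a rough illustration of the kernel parameters discussed above, the sketch below builds a truncated Gaussian TOF kernel from a timing FWHM. It is a generic construction, not the Philips GEMINI TF implementation; the 4 mm bin size and the 675 ps value (taken from the reported 650-700 ps optimum) are assumptions.

```python
import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_kernel(fwhm_ps, bin_mm=4.0, trunc_sigmas=3.0):
    """Gaussian TOF kernel along the LOR, truncated at +/- trunc_sigmas."""
    fwhm_mm = 0.5 * C_MM_PER_PS * fwhm_ps            # timing FWHM -> positional FWHM
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    n = int(np.ceil(trunc_sigmas * sigma_mm / bin_mm))
    offsets = np.arange(-n, n + 1) * bin_mm          # symmetric support around the event
    kernel = np.exp(-0.5 * (offsets / sigma_mm) ** 2)
    return offsets, kernel / kernel.sum()            # normalise to preserve counts

offsets, k = tof_kernel(fwhm_ps=675.0, trunc_sigmas=3.0)
print(f"support {offsets[0]:.0f}..{offsets[-1]:.0f} mm in {k.size} bins")
```
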
2.
Med Phys ; 36(4): 1053-60, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19472610

ABSTRACT

The GEANT4 application for tomographic emission (GATE) is one of the most detailed Monte Carlo simulation tools for SPECT and PET. It allows for realistic phantoms, complex decay schemes, and a large variety of detector geometries. However, only a fraction of the information in each particle history is available for postprocessing. In order to extend the analysis capabilities of GATE, a flexible framework was developed. This framework allows all detected events to be subdivided according to their type: In PET, true coincidences from others, and in SPECT, geometrically collimated photons from others. The framework of the authors can be applied to any isotope, phantom, and detector geometry available in GATE. It is designed to enhance the usability of GATE for the study of contamination and for the investigation of the properties of current and future prototype detectors. The authors apply the framework to a case study of Bexxar, first assuming labeling with 124I, then with 131I. It is shown that with 124I PET, results with an optimized window improve upon those with the standard window but achieve less than half of the ideal improvement. Nevertheless, 124I PET shows improved resolution compared to 131I SPECT with triple-energy-window scatter correction.


Subject(s)
Positron-Emission Tomography/methods; Tomography, Emission-Computed, Single-Photon/methods; Computer Simulation; Humans; Iodine Radioisotopes/chemistry; Kidney/diagnostic imaging; Monte Carlo Method; Phantoms, Imaging; Photons; Physics/methods; Positron-Emission Tomography/instrumentation; Radioisotopes/chemistry; Radiometry/methods; Scattering, Radiation; Software; Thorax/metabolism; Tomography, Emission-Computed, Single-Photon/instrumentation
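
A minimal sketch of the event subdivision described above, using a hypothetical coincidence record whose fields are loosely modelled on GATE output (the field names are assumptions, not the real schema):

```python
from dataclasses import dataclass

@dataclass
class Coincidence:
    # Assumed fields: decay IDs and phantom Compton counts for the two photons.
    event_id_1: int
    event_id_2: int
    compton_phantom_1: int
    compton_phantom_2: int

def classify(c: Coincidence) -> str:
    """Subdivide detected coincidences into randoms, phantom scatters and trues."""
    if c.event_id_1 != c.event_id_2:
        return "random"         # the two photons come from different decays
    if c.compton_phantom_1 or c.compton_phantom_2:
        return "scattered"      # at least one photon scattered in the phantom
    return "true"

events = [Coincidence(1, 1, 0, 0), Coincidence(2, 2, 1, 0), Coincidence(3, 4, 0, 0)]
print([classify(e) for e in events])    # ['true', 'scattered', 'random']
```
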
3.
Phys Med Biol ; 54(3): 715-29, 2009 Feb 07.
Article in English | MEDLINE | ID: mdl-19131666

ABSTRACT

As an alternative to the use of traditional parallel hole collimators, SPECT imaging can be performed using rotating slat collimators. While maintaining the spatial resolution, a gain in image quality could be expected from the higher photon collection efficiency of this type of collimator. However, the use of iterative methods to do fully three-dimensional (3D) reconstruction is computationally much more expensive and furthermore involves slow convergence compared to a classical SPECT reconstruction. It has been proposed to do 3D reconstruction by splitting the system matrix into two separate matrices, forcing the reconstruction to first estimate the sinograms from the rotating slat SPECT data before estimating the image. While alleviating the computational load by one order of magnitude, this split matrix approach would result in fast computation of the projections in an iterative algorithm, but does not solve the problem of slow convergence. There is thus a need for an algorithm which speeds up convergence while maintaining image quality for rotating slat collimated SPECT cameras. Therefore, we developed a reconstruction algorithm based on the split matrix approach which allows both a fast calculation of the forward and backward projection and a fast convergence. In this work, an algorithm of the maximum likelihood expectation maximization (MLEM) type, obtained from a split system matrix MLEM reconstruction, is proposed as a reconstruction method for rotating slat collimated SPECT data. Here, we compare this new algorithm to the conventional split system matrix MLEM method and to a gold standard fully 3D MLEM reconstruction algorithm on the basis of computational load, convergence and contrast-to-noise. Furthermore, ordered subsets expectation maximization (OSEM) implementations of these three algorithms are compared. Calculation of computational load and convergence for the different algorithms shows a speedup for the new method of 38 and 426 compared to the split matrix MLEM approach and the fully 3D MLEM, respectively, and a speedup of 16 and 21 compared to the split matrix OSEM and the fully 3D OSEM, respectively. A contrast-to-noise study based on simulated data shows that our new approach has accuracy comparable to that of the fully 3D reconstruction method. The algorithm developed in this study allows iterative image reconstruction of rotating slat collimated SPECT data with image quality equal to that of a classical SPECT reconstruction in a comparable amount of computation time.


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Tomography, Emission-Computed, Single-Photon/instrumentation; Tomography, Emission-Computed, Single-Photon/methods; Equipment Design; Equipment Failure Analysis; Humans; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity
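
The sketch below shows a conventional MLEM update with a system matrix factored into two parts (image-to-plane and plane-to-projection), the split-matrix idea referred to above, on toy random matrices. It is not the authors' accelerated algorithm, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_plane, n_proj = 64, 32, 48
P1 = rng.random((n_plane, n_vox))        # image -> plane integrals (first factor)
P2 = rng.random((n_proj, n_plane))       # plane integrals -> slat projections (second factor)
x_true = rng.random(n_vox)
y = P2 @ (P1 @ x_true)                   # noise-free data for this sketch

x = np.ones(n_vox)
sens = P1.T @ (P2.T @ np.ones(n_proj))   # sensitivity image of the factored model
for _ in range(100):
    fwd = P2 @ (P1 @ x)                                          # forward through both factors
    x *= (P1.T @ (P2.T @ (y / np.maximum(fwd, 1e-12)))) / sens   # multiplicative MLEM update
print("relative data mismatch:", np.linalg.norm(P2 @ (P1 @ x) - y) / np.linalg.norm(y))
```
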
4.
Phys Med Biol ; 54(6): 1673-89, 2009 Mar 21.
Article in English | MEDLINE | ID: mdl-19242052

ABSTRACT

The simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) can give new insights into how the brain functions. However, the strong electromagnetic field of the MR scanner generates artifacts that obscure the EEG and diminish its readability. Among them, the ballistocardiographic artifact (BCGa) that appears on the EEG is believed to be related to blood flow in scalp arteries leading to electrode movements. Average artifact subtraction (AAS) techniques, used to remove the BCGa, assume a deterministic nature of the artifact. This assumption may be too strong, considering the blood flow related nature of the phenomenon. In this work we propose a new method, based on canonical correlation analysis (CCA) and blind source separation (BSS) techniques, to reduce the BCGa from simultaneously recorded EEG-fMRI. We optimized the method to reduce the user's interaction to a minimum. When tested on six subjects, recorded at 1.5 T or 3 T, the average artifacts extracted with BSS-CCA and AAS did not show significant differences, proving the absence of systematic errors. On the other hand, when compared on the basis of intra-subject variability, we found significant differences and better performance of the proposed method with respect to AAS. We demonstrated that our method deals with the intrinsic subject variability specific to the artifact that may cause averaging techniques to fail.


Subject(s)
Artifacts; Ballistocardiography; Electroencephalography/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Humans
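
A bare-bones version of BSS-CCA, the ingredient named above: sources are obtained by canonical correlation between the recording and a one-sample-delayed copy of itself, and the most autocorrelated component (where a BCG-like artifact tends to end up) is removed. This is a simplified sketch on synthetic data, not the paper's optimized pipeline.

```python
import numpy as np
from scipy.linalg import eigh

def bss_cca(X, lag=1):
    """Blind source separation via canonical correlation between X(t) and X(t - lag).
    X is channels x samples; returns the unmixing matrix W and sources S = W @ X,
    ordered from most to least autocorrelated."""
    Xc = X - X.mean(axis=1, keepdims=True)
    C0 = Xc @ Xc.T / Xc.shape[1]                       # zero-lag covariance
    C1 = Xc[:, lag:] @ Xc[:, :-lag].T / (Xc.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)                             # symmetrised lagged covariance
    vals, vecs = eigh(C1, C0)                          # generalised eigenproblem
    W = vecs[:, np.argsort(vals)[::-1]].T
    return W, W @ Xc

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 250.0)                        # 10 s at 250 Hz
bcg = np.sin(2 * np.pi * 1.2 * t)                      # pulse-like artifact waveform
X = 0.5 * rng.standard_normal((8, t.size)) + np.outer(rng.random(8), bcg)
W, S = bss_cca(X)
S[0] = 0.0                                             # drop the most autocorrelated source
cleaned = np.linalg.pinv(W) @ S                        # back-project without that component
```
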
5.
J Magn Reson ; 190(2): 189-99, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18023218

ABSTRACT

Diffusion weighted magnetic resonance imaging enables the visualization of fibrous tissues such as brain white matter. The validation of this non-invasive technique requires phantoms with a well-known structure and diffusion behavior. This paper presents anisotropic diffusion phantoms consisting of parallel fibers. The diffusion properties of the fiber phantoms are measured using diffusion weighted magnetic resonance imaging and bulk NMR measurements. To enable quantitative evaluation of the measurements, the diffusion in the interstitial space between fibers is modeled using Monte Carlo simulations of random walkers. The time-dependent apparent diffusion coefficient and kurtosis, quantifying the deviation from a Gaussian diffusion profile, are simulated in 3D geometries of parallel fibers with varying packing geometries and packing densities. The simulated diffusion coefficients are compared to the theory of diffusion in porous media, showing a good agreement. Based on the correspondence between simulations and experimental measurements, the fiber phantoms are shown to be useful for the quantitative validation of diffusion imaging on clinical MRI-scanners.


Subject(s)
Brain Mapping/methods; Diffusion Magnetic Resonance Imaging/methods; Phantoms, Imaging; Algorithms; Anisotropy; Body Water/metabolism; Computer Simulation; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Monte Carlo Method; Nerve Fibers
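
A toy version of the random-walker simulation mentioned above: walkers diffuse freely along the fibre axis and are reflected between two walls across it, a crude 1D stand-in for the interstitial space between parallel fibres. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D0, dt = 2.0e-9, 1.0e-5              # free diffusion of water (m^2/s), time step (s)
n_steps, n_walkers = 5000, 2000
gap = 20e-6                          # assumed spacing between reflecting walls (m)
step = np.sqrt(2.0 * D0 * dt)        # 1D step standard deviation

x = rng.uniform(0.0, gap, n_walkers); x0 = x.copy()   # restricted direction
z = np.zeros(n_walkers)                               # free direction (along the fibres)
for _ in range(n_steps):
    x += rng.normal(0.0, step, n_walkers)
    x = np.abs(x)                    # reflect at the lower wall
    x = gap - np.abs(gap - x)        # reflect at the upper wall
    z += rng.normal(0.0, step, n_walkers)

t = n_steps * dt
print("ADC across fibres:", ((x - x0) ** 2).mean() / (2 * t))   # time dependent, below D0
print("ADC along fibres :", (z ** 2).mean() / (2 * t))          # close to D0
```
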
6.
Clin Neurophysiol ; 119(8): 1756-1770, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18499517

ABSTRACT

OBJECTIVE: Methods for the detection of epileptiform events can be broadly divided into two main categories: temporal detection methods that exploit the EEG's temporal characteristics, and spatial detection methods that base detection on the results of an implicit or explicit source analysis. We describe how the framework of a spatial detection method was extended to improve its performance by including temporal information. This results in a method that provides (i) automated localization of an epileptogenic focus and (ii) detection of focal epileptiform events in an EEG recording. For the detection, only one threshold value needs to be set. METHODS: The method comprises five consecutive steps: (1) dipole source analysis in a moving window, (2) automatic selection of focal brain activity, (3) dipole clustering to arrive at the identification of the epileptiform cluster, (4) derivation of a spatio-temporal template of the epileptiform activity, and (5) template matching. Routine EEG recordings from eight paediatric patients with focal epilepsy were labelled independently by two experts. The method was evaluated in terms of (i) ability to identify the epileptic focus, (ii) validity of the derived template, and (iii) detection performance. The clustering performance was evaluated using a leave-one-out cross validation. Detection performance was evaluated using Precision-Recall curves and compared to the performance of two temporal (mimetic and wavelet based) and one spatial (dipole analysis based) detection methods. RESULTS: The method succeeded in identifying the epileptogenic focus in seven of the eight recordings. For these recordings, the mean distance between the epileptic focus estimated by the method and the region indicated by the labelling of the experts was 8 mm. Except for two EEG recordings where the dipole clustering step failed, the derived template corresponded to the epileptiform activity marked by the experts. Over the eight EEGs, the method showed a mean sensitivity and selectivity of 92% and 77%, respectively. CONCLUSIONS: The method allows automated localization of the epileptogenic focus and shows good agreement with the region indicated by the labelling of the experts. If the dipole clustering step is successful, the method allows detection of the focal epileptiform events, and gave a detection performance comparable to or better than that of the other methods. SIGNIFICANCE: The identification and quantification of epileptiform events is of considerable importance in the diagnosis of epilepsy. Our method allows the automatic identification of the epileptic focus, which is of value in epilepsy surgery. The method can also be used as an offline exploration tool for focal EEG activity, displaying the dipole clusters and corresponding time series.


Subject(s)
Brain Mapping; Brain/physiopathology; Electroencephalography; Epilepsies, Partial/physiopathology; Algorithms; Child; Child, Preschool; Cluster Analysis; Electrodes; Epilepsies, Partial/pathology; Female; Humans; Male; Reproducibility of Results; Signal Processing, Computer-Assisted; Time Factors
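
Step (5) above is a template-matching step; the sketch below shows one plausible form of it, a normalised spatio-temporal cross-correlation with a single threshold, on synthetic data. It illustrates the principle only, not the authors' implementation.

```python
import numpy as np

def detect_events(eeg, template, threshold=0.8):
    """Slide a spatio-temporal template over multichannel EEG (channels x samples)
    and return window start indices whose normalised correlation exceeds threshold."""
    n_ch, width = template.shape
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits = []
    for s in range(eeg.shape[1] - width + 1):
        w = eeg[:, s:s + width] - eeg[:, s:s + width].mean()
        norm = np.linalg.norm(w)
        if norm > 0 and np.sum(w * t) / norm >= threshold:
            hits.append(s)
    return hits

rng = np.random.default_rng(0)
spike = rng.standard_normal((21, 30))                  # toy spatio-temporal template
eeg = 0.3 * rng.standard_normal((21, 2000))
eeg[:, 700:730] += spike                               # embed one 'epileptiform event'
print(detect_events(eeg, spike, threshold=0.6))        # indices clustered around 700
```
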
7.
Med Phys ; 35(4): 1476-85, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18491542

ABSTRACT

Geant4 application for tomographic emission (GATE) is a geometry and tracking 4 (Geant4) application toolkit for accurate simulation of positron emission tomography and single photon emission computed tomography (SPECT) scanners. GATE simulations with realistic count levels are very CPU-intensive as they take up to several days with single-CPU computers. Therefore, we implemented both standard forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling, which allows the simulation of all projections simultaneously in GATE. In addition, a FD and CFD specialized Geant4 navigator was developed to overcome the detailed but slow tracking algorithms in Geant4. This article is focused on the implementation and validation of these aforementioned developments. The results show a good agreement between the FD and CFD versus analog GATE simulations for Tc-99m SPECT. These combined developments accelerate GATE by three orders of magnitude in the case of FD. CFD is an additional two orders of magnitude faster than FD. This renders realistic simulations feasible within minutes on a single CPU. Future work will extend our framework to higher energy isotopes, which will require the inclusion of a septal penetration and collimator scatter model.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Software; Tomography, Emission-Computed, Single-Photon/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Time Factors
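
The core idea of forced detection referred to above can be written as a weight attached to each photon history, rather than waiting for the rare analog event. A minimal sketch with illustrative numbers (the attenuation coefficient and acceptance fraction are assumptions):

```python
import numpy as np

def forced_detection_weight(mu_per_cm, path_to_exit_cm, geom_acceptance):
    """Weight contributed when a photon is 'forced' towards the detector from an
    interaction vertex: probability of emission into the accepted solid angle
    times the attenuation survival probability over the remaining path."""
    return geom_acceptance * np.exp(-mu_per_cm * path_to_exit_cm)

# Illustrative: 140 keV photon in water (mu assumed ~0.15 /cm), 8 cm of tissue
# to traverse, collimator accepting roughly 1e-4 of the emitted solid angle.
print(forced_detection_weight(0.15, 8.0, 1e-4))
```
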
8.
Phys Med Biol ; 53(7): 1989-2002, 2008 Apr 07.
Article in English | MEDLINE | ID: mdl-18356576

ABSTRACT

The main remaining challenge for a gamma camera is to overcome the existing trade-off between collimator spatial resolution and system sensitivity. This problem, strongly limiting the performance of parallel hole collimated gamma cameras, can be overcome by applying new collimator designs such as rotating slat (RS) collimators which have a much higher photon collection efficiency. The drawback of a RS collimated gamma camera is that, even for obtaining planar images, image reconstruction is needed, resulting in noise accumulation. However, nowadays iterative reconstruction techniques with accurate system modeling can provide better image quality. Because the impact of this modeling on image quality differs from one system to another, an objective assessment of the image quality obtained with a RS collimator is needed in comparison to classical projection images obtained using a parallel hole (PH) collimator. In this paper, a comparative study of image quality, achieved with system modeling, is presented. RS data are reconstructed to planar images using maximum likelihood expectation maximization (MLEM) with an accurate Monte Carlo derived system matrix while PH projections are deconvolved using a Monte Carlo derived point-spread function. Contrast-to-noise characteristics are used to show image quality for cold and hot spots of varying size. Influence of the object size and contrast is investigated using the optimal contrast-to-noise ratio (CNR(o)). For a typical phantom setup, results show that cold spot imaging is slightly better for a PH collimator. For hot spot imaging, the CNR(o) of the RS images is found to increase with increasing lesion diameter and lesion contrast while it decreases when background dimensions become larger. Only for very large background dimensions in combination with low contrast lesions, the use of a PH collimator could be beneficial for hot spot imaging. In all other cases, the RS collimator scores better. Finally, the simulation of a planar bone scan on a RS collimator revealed a hot spot contrast improvement up to 54% compared to a classical PH bone scan.


Subject(s)
Gamma Cameras; Image Interpretation, Computer-Assisted/methods; Algorithms; Computers; Humans; Image Processing, Computer-Assisted; Likelihood Functions; Models, Statistical; Models, Theoretical; Monte Carlo Method; Neoplasm Metastasis; Neoplasms/pathology; Phantoms, Imaging; Software; Tomography, X-Ray Computed/methods
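
A small helper for the contrast-to-noise figure of merit used above, applied to a toy image; the masks and values are arbitrary.

```python
import numpy as np

def cnr(image, lesion_mask, background_mask):
    """Contrast-to-noise ratio: |lesion mean - background mean| / background std."""
    lesion = image[lesion_mask].mean()
    bkg = image[background_mask].mean()
    return abs(lesion - bkg) / image[background_mask].std()

rng = np.random.default_rng(4)
img = rng.normal(100.0, 5.0, (64, 64))        # uniform background with noise
img[28:36, 28:36] += 20.0                     # hot spot
yy, xx = np.mgrid[0:64, 0:64]
lesion = (abs(yy - 31.5) < 4) & (abs(xx - 31.5) < 4)
background = (abs(yy - 31.5) > 10) & (abs(xx - 31.5) > 10)
print("CNR:", cnr(img, lesion, background))
```
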
9.
Phys Med Biol ; 53(7): 1877-94, 2008 Apr 07.
Article in English | MEDLINE | ID: mdl-18364544

ABSTRACT

The conductivities used in the head model play a very important role in improving EEG source localization in the brain. In this study, we focus on the modeling of the anisotropic conductivity of the white matter. The anisotropic conductivity profile can be derived from diffusion weighted magnetic resonance images (DW-MRI). However, deriving these anisotropic conductivities from diffusion weighted MR images of the white matter is not straightforward. In the literature, two methods can be found for calculating the conductivity from the diffusion weighted images. One method uses a fixed value for the ratio of the conductivity in different directions, while the other method uses a conductivity profile obtained from a linear scaling of the diffusion ellipsoid. We propose a model which can be used to derive the conductivity profile from the diffusion tensor images. This model is based on the variable anisotropic ratio throughout the white matter and is a combination of the linear relationship as stated in the literature, with a constraint on the magnitude of the conductivity tensor (also known as the volume constraint). This approach is stated in the paper as approach A. In our study we want to investigate dipole estimation differences due to using a more simplified model for white matter anisotropy (approach B), while the electrode potentials are derived using a head model with a more realistic approach for the white matter anisotropy (approach A). We used a realistic head model, in which the forward problem was solved using a finite difference method that can incorporate anisotropic conductivities. As error measures we considered the dipole location error and the dipole orientation error. The results show that the dipole location errors are all below 10 mm and have an average of 4 mm in gray matter regions. The dipole orientation errors ranged up to 66.4 degrees, and had a mean of 11.6 degrees in gray matter regions. In a qualitative manner, the results show that the orientation and location errors are dependent on the orientation of the test dipole. The location error is larger when the orientation of the test dipole is similar to the orientation of the anisotropy, while the orientation error is larger when the orientation of the test dipole deviates from the orientation of the anisotropy. From these results, we can conclude that the modeling of white matter anisotropy plays an important role in EEG source localization. More specifically, accurate source localization will require an accurate modeling of the white matter conductivity profile in each voxel.


Subject(s)
Electroencephalography/methods; Algorithms; Animals; Anisotropy; Brain/pathology; Brain Mapping/methods; Computer Simulation; Diffusion; Electrodes; Electroencephalography/instrumentation; Equipment Design; Humans; Models, Biological; Neurons/metabolism; Reproducibility of Results; Signal Processing, Computer-Assisted
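
A compact sketch of the tensor mapping described above (approach A): the conductivity tensor shares the eigenvectors of the diffusion tensor, its eigenvalues scale linearly with the diffusion eigenvalues, and the tensor is renormalised by a volume constraint. The isotropic reference value of 0.33 S/m is an assumption, and the exact formulation used in the paper may differ.

```python
import numpy as np

def conductivity_from_dti(D, sigma_iso=0.33):
    """Conductivity tensor sharing the eigenvectors of the diffusion tensor D, with
    eigenvalues proportional to the diffusion eigenvalues (linear scaling), rescaled
    so the tensor volume matches an isotropic reference sigma_iso (volume constraint)."""
    evals, evecs = np.linalg.eigh(D)
    sigma = evals * (sigma_iso ** 3 / np.prod(evals)) ** (1.0 / 3.0)
    return evecs @ np.diag(sigma) @ evecs.T

D_wm = np.diag([1.7e-3, 3.0e-4, 3.0e-4])    # toy white-matter diffusion tensor (mm^2/s)
print(np.linalg.eigvalsh(conductivity_from_dti(D_wm)))
```
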
10.
Phys Med Biol ; 53(19): 5405-19, 2008 Oct 07.
Article in English | MEDLINE | ID: mdl-18765890

ABSTRACT

Diffusion weighted magnetic resonance imaging offers a non-invasive tool to explore the three-dimensional structure of brain white matter in clinical practice. Anisotropic diffusion hardware phantoms are useful for the quantitative validation of this technique. This study provides guidelines on how to manufacture anisotropic fibre phantoms in a reproducible way and which fibre material to choose to obtain a good quality of the diffusion weighted images. Several fibre materials are compared regarding their effect on the diffusion MR measurements of the water molecules inside the phantoms. The material properties that influence the diffusion anisotropy are the fibre density and diameter, while the fibre surface relaxivity and magnetic susceptibility determine the signal-to-noise ratio. The effect on the T(2)-relaxation time of water in the phantoms has been modelled and the diffusion behaviour inside the fibre phantoms has been quantitatively evaluated using Monte Carlo random walk simulations.


Subject(s)
Diffusion Magnetic Resonance Imaging/methods; Diffusion; Phantoms, Imaging; Anisotropy; Diffusion Magnetic Resonance Imaging/instrumentation; Magnetism; Reproducibility of Results; Surface Properties; Time Factors; Water/chemistry
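
The statement above, that fibre surface relaxivity and geometry drive the T2 of interstitial water, can be illustrated with a simple fast-exchange model: 1/T2 = 1/T2_bulk + rho * S/V, with S/V computed for cylinders at a given packing fraction. All numbers below are illustrative assumptions, not the fitted model of the paper.

```python
def t2_in_fibre_phantom(t2_bulk_s, rho_m_per_s, fibre_radius_m, packing_fraction):
    """T2 of interstitial water from bulk plus surface relaxation, using the
    surface-to-volume ratio of the water space, S/V = 2f / (r (1 - f)), for
    parallel cylinders of radius r at fibre volume fraction f."""
    s_over_v = 2.0 * packing_fraction / (fibre_radius_m * (1.0 - packing_fraction))
    return 1.0 / (1.0 / t2_bulk_s + rho_m_per_s * s_over_v)

# Assumed values: bulk T2 of 2 s, surface relaxivity 5e-6 m/s, 10 um radius, 50% packing.
print(t2_in_fibre_phantom(2.0, 5e-6, 10e-6, 0.5))   # roughly 0.67 s
```
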
11.
Med Phys ; 34(5): 1766-78, 2007 May.
Article in English | MEDLINE | ID: mdl-17555258

ABSTRACT

The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum A-posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as the order, the number of basis functions and the knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes in the TACs; however, the TACs became noisier. We have compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placement strategy was also developed and evaluated. It is based on an arc length redistribution scheme during the reconstruction. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level especially for fast changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot placement strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques.


Subject(s)
Image Interpretation, Computer-Assisted; Pattern Recognition, Automated; Positron-Emission Tomography/methods; Radiographic Image Enhancement/methods; Likelihood Functions; Phantoms, Imaging; Subtraction Technique
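
A short sketch of the temporal basis described above: a cubic B-spline design matrix over a fixed knot grid, fitted here to a toy time-activity curve by least squares as a stand-in for the ML/MAP estimation of the paper. The frame times, knot grid and TAC shape are all assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(t, inner_knots, order=3):
    """Design matrix with one column per B-spline basis function evaluated at t."""
    k = order
    knots = np.concatenate(([inner_knots[0]] * k, inner_knots, [inner_knots[-1]] * k))
    n_basis = len(knots) - k - 1
    basis = np.empty((t.size, n_basis))
    for i in range(n_basis):
        coeff = np.zeros(n_basis); coeff[i] = 1.0
        basis[:, i] = BSpline(knots, coeff, k)(t)
    return basis

t = 0.25 * np.arange(240) + 0.125                        # frame mid-times (min), assumed
tac = 5.0 * (1 - np.exp(-0.3 * t)) * np.exp(-0.02 * t)   # toy time-activity curve
B = bspline_basis(t, inner_knots=np.linspace(0.0, 60.0, 8), order=3)
coeffs, *_ = np.linalg.lstsq(B, tac, rcond=None)         # least-squares stand-in for ML
print("max fit error:", np.abs(B @ coeffs - tac).max())
```
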
12.
Med Phys ; 34(6): 1926-33, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17654895

ABSTRACT

Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.


Subject(s)
Computer Communication Networks; Computing Methodologies; Image Interpretation, Computer-Assisted/methods; Models, Biological; Signal Processing, Computer-Assisted; Software; Tomography, Emission-Computed/methods; Algorithms; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity
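
The time-splitting idea behind the cluster approach above can be sketched as generating one macro fragment per job. The `/gate/application/...` command names below are written from memory as an assumption and should be checked against the GATE documentation.

```python
def split_time_range(t_start_s, t_stop_s, n_jobs):
    """Split a simulated acquisition into equal time slices, one per cluster job."""
    slice_s = (t_stop_s - t_start_s) / n_jobs
    return [(t_start_s + i * slice_s, t_start_s + (i + 1) * slice_s) for i in range(n_jobs)]

for job, (t0, t1) in enumerate(split_time_range(0.0, 600.0, n_jobs=4)):
    print(f"# job {job}")
    print(f"/gate/application/setTimeStart {t0:.1f} s")
    print(f"/gate/application/setTimeStop  {t1:.1f} s")
```
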
13.
Phys Med Biol ; 52(23): 6781-94, 2007 Dec 07.
Article in English | MEDLINE | ID: mdl-18029975

ABSTRACT

Carnosine has been shown to be present in the skeletal muscle and in the brain of a variety of animals and humans. Despite the various physiological functions assigned to this metabolite, its exact role remains unclear. It has been suggested that carnosine plays a role in buffering in the intracellular physiological pHi range in skeletal muscle as a result of accepting hydrogen ions released in the development of fatigue during intensive exercise. It is thus postulated that the concentration of carnosine is an indicator for the extent of the buffering capacity. However, the determination of the concentration of this metabolite has only been performed by means of muscle biopsy, which is an invasive procedure. In this paper, we utilized proton magnetic resonance spectroscopy (1H MRS) in order to perform absolute quantification of carnosine in vivo non-invasively. The method was verified by phantom experiments and in vivo measurements in the calf muscles of athletes and untrained volunteers. The measured mean concentrations in the soleus and the gastrocnemius muscles were found to be 2.81 +/- 0.57/4.8 +/- 1.59 mM (mean +/- SD) for athletes and 2.58 +/- 0.65/3.3 +/- 0.32 mM for untrained volunteers, respectively. These values are in agreement with previously reported biopsy-based results. Our results suggest that 1H MRS can provide an alternative method for non-invasively determining carnosine concentration in human calf muscle in vivo.


Subject(s)
Algorithms; Carnosine/analysis; Magnetic Resonance Spectroscopy/methods; Muscle, Skeletal/metabolism; Humans; Protons; Thigh; Tissue Distribution
14.
Cancer Biother Radiopharm ; 22(3): 423-30, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17651050

ABSTRACT

I-131 is a frequently used isotope for radionuclide therapy. This technique for cancer treatment requires a pre-therapeutic dosimetric study. The latter is usually performed (for this radionuclide) by directly imaging the uptake of the therapeutic radionuclide in the body or by replacing it with one of its isotopes, which are more suitable for imaging. This study aimed to compare the image quality that can be achieved by three iodine isotopes: I-131 and I-123 for single-photon emission computed tomography imaging, and I-124 for positron emission tomography imaging. The imaging characteristics of each isotope were investigated by simulated data. Their spectra, point-spread functions, and contrast-recovery curves were drawn and compared. I-131 was imaged with a high-energy all-purpose (HEAP) collimator, whereas two collimators were compared for I-123: low-energy high-resolution (LEHR) and medium energy (ME). No mechanical collimation was used for I-124. The influence of small high-energy peaks (>0.1%) on the contamination of the main energy window was evaluated. Furthermore, the effect of a scattering medium was investigated and the triple energy window (TEW) correction was used for spectral-based scatter correction. Results showed that I-123 gave the best results with a LEHR collimator when the scatter correction was applied. Without correction, the ME collimator reduced the effects of high-energy contamination. I-131 offered the worst results. This can be explained by the large amount of septal penetration from the photopeak and by the collimator, which gave a low spatial resolution. I-124 gave the best imaging properties owing to its electronic collimation (high sensitivity) and a short coincidence time window.


Subject(s)
Iodine Radioisotopes; Computer Simulation; Humans; Image Processing, Computer-Assisted; Iodine Radioisotopes/classification; Molecular Weight; Phantoms, Imaging; Sensitivity and Specificity; Tomography/methods
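
The TEW correction named above estimates the scatter inside the photopeak window by trapezoidal interpolation between two narrow flanking windows. A minimal sketch with illustrative I-123 window settings (the counts and window widths are assumptions):

```python
def tew_scatter_estimate(c_low, c_high, w_low_keV, w_high_keV, w_main_keV):
    """Triple-energy-window estimate of the scatter counts inside the photopeak
    window: trapezoidal interpolation between two narrow flanking windows."""
    return (c_low / w_low_keV + c_high / w_high_keV) * w_main_keV / 2.0

# Assumed settings: 159 keV +/- 10% photopeak (31.8 keV wide) with 6 keV sub-windows.
scatter = tew_scatter_estimate(1200.0, 300.0, 6.0, 6.0, 31.8)
print("scatter estimate:", scatter, "-> primary:", 25000.0 - scatter)
```
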
15.
J Neuroeng Rehabil ; 4: 46, 2007 Nov 30.
Article in English | MEDLINE | ID: mdl-18053144

ABSTRACT

BACKGROUND: The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. METHODS: While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers in this research field. RESULTS: It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation, and Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus then shifts to the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, conjugate gradients method and algebraic multigrid method. CONCLUSION: Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model and the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem.


Subject(s)
Brain Mapping; Brain/physiology; Electroencephalography; Humans; Models, Neurological
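
As a toy illustration of the FDM route described above, the sketch below solves a 2D Poisson problem for a current dipole in a uniform conductor with insulating boundaries, using successive over-relaxation. It is deliberately small and isotropic, unlike the realistic anisotropic head models discussed in the review.

```python
import numpy as np

n, sigma, h = 40, 0.33, 1.0                 # grid size, conductivity (S/m), grid step
rhs = np.zeros((n, n))
rhs[15, 20], rhs[25, 20] = 1.0, -1.0        # unit current source and sink (a dipole)
V = np.zeros((n, n))
omega = 1.85                                # SOR over-relaxation factor

for _ in range(500):
    for i in range(n):
        for j in range(n):
            # Neumann (no-flux) boundaries: mirror the potential at the edges.
            nb = (V[max(i - 1, 0), j] + V[min(i + 1, n - 1), j]
                  + V[i, max(j - 1, 0)] + V[i, min(j + 1, n - 1)])
            V[i, j] += omega * ((nb + h * h * rhs[i, j] / sigma) / 4.0 - V[i, j])

V -= V.mean()                               # fix the free constant (average reference)
print("potential at two boundary 'electrodes':", V[0, 20], V[n - 1, 20])
```
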
16.
Phys Med Biol ; 51(2): 391-405, 2006 Jan 21.
Article in English | MEDLINE | ID: mdl-16394346

ABSTRACT

In classical SPECT with parallel hole collimation, the sensitivity is constant over the field of view (FOV). This is no longer the case if a rotating slat collimator with planar photon collection is used: there will be a significant variation of the sensitivity within the FOV. Since not compensating for this inhomogeneous sensitivity distribution would result in non-quantitative images, an accurate knowledge of the sensitivity is mandatory to account for it during reconstruction. On the other hand, the spatial resolution versus distance dependency remains unaltered compared to parallel hole collimation. For deriving the sensitivity, different factors have to be taken into account: a first factor concerns the intrinsic detector properties and will be incorporated into the calculations as a detection efficiency term depending on the incident angle. The calculations are based on a second and more pronounced factor: the collimator and detector geometry. Several assumptions will be made for the calculation of the sensitivity formulae and it will be proven that these calculations deliver a valid prediction of the sensitivity at points far enough from the collimator. To derive a close field model which also accounts for points close to the collimator surface, a modified calculation method is used. After calculating the sensitivity in one plane it is easy to obtain the tomographic sensitivity. This is done by rotating the sensitivity maps for spin and camera rotation. The results derived from the calculations are then compared to simulation results and both show good agreement after including the aforementioned detection efficiency term. The validity of the calculations is also proven by measuring the sensitivity in the FOV of a prototype rotating slat gamma camera. An expression for the resolution of these planar collimation systems is obtained. It is shown that for equal collimator dimensions the same resolution-distance relationship is obtained as for parallel hole collimators. However, a better spatial resolution can be obtained with our prototype camera due to the smaller pitch of the slats. This can be achieved without a major drop in system sensitivity due to the fact that the slats consist of less collimator material compared to a parallel hole collimator. The accuracy of the calculated resolution is proven by comparison with resolution values from Monte Carlo simulation and measurement.


Subject(s)
Algorithms; Computer Simulation; Tomography, Emission-Computed, Single-Photon/instrumentation; Equipment Design; Gamma Cameras; Monte Carlo Method; Tomography, Emission-Computed, Single-Photon/methods
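
The resolution-distance relationship mentioned above has the familiar collimator form; a minimal sketch (ignoring intrinsic detector resolution and septal penetration, with made-up dimensions):

```python
def geometric_resolution_mm(opening_mm, height_mm, distance_mm):
    """Geometric resolution of a slat (or parallel-hole) collimator: the opening
    width scaled by (collimator height + source distance) / height."""
    return opening_mm * (height_mm + distance_mm) / height_mm

for z in (0.0, 50.0, 100.0, 150.0):
    print(f"{z:5.0f} mm -> {geometric_resolution_mm(1.5, 35.0, z):5.2f} mm FWHM")
```
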
17.
Phys Med Biol ; 51(12): 3105-25, 2006 Jun 21.
Article in English | MEDLINE | ID: mdl-16757866

ABSTRACT

In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first blockrow needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Positron-Emission Tomography/methods; Signal Processing, Computer-Assisted; Computer Simulation; Humans; Models, Biological; Models, Statistical; Monte Carlo Method; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Reproducibility of Results; Sensitivity and Specificity
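
The storage trick mentioned above, keeping only the first block row of a block-circulant system matrix, can be sketched as follows; the block sizes are toy values.

```python
import numpy as np

def block_circulant_matvec(first_block_row, x_blocks):
    """Multiply a block-circulant matrix by a block vector while storing only the
    first block row: block (i, j) equals first_block_row[(j - i) mod n_blocks]."""
    n = len(first_block_row)
    return [sum(first_block_row[(j - i) % n] @ x_blocks[j] for j in range(n))
            for i in range(n)]

rng = np.random.default_rng(3)
blocks = [rng.random((4, 4)) for _ in range(6)]     # first block row (6 angular blocks)
x = [rng.random(4) for _ in range(6)]
y = block_circulant_matvec(blocks, x)
print([row.shape for row in y])
```
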
18.
Phys Med Biol ; 50(16): 3787-806, 2005 Aug 21.
Article in English | MEDLINE | ID: mdl-16077227

ABSTRACT

Many implementations of electroencephalogram (EEG) dipole source localization neglect the anisotropical conductivities inherent to brain tissues, such as the skull and white matter anisotropy. An examination is made of the dipole localization errors in EEG source analysis that arise from not incorporating the anisotropic conductivity of the skull and white matter. First, simulations were performed in a five-shell spherical head model using the analytical formula. Test dipoles were placed in three orthogonal planes in the spherical head model. Neglecting the skull anisotropy results in a dipole localization error of, on average, 13.73 mm with a maximum of 24.51 mm. For white matter anisotropy these values are 11.21 mm and 26.3 mm, respectively. Next, a finite difference method (FDM), presented by Saleheen and Kwong (1997 IEEE Trans. Biomed. Eng. 44 800-9), is used to incorporate the anisotropy of the skull and white matter. The FDM method has been validated for EEG dipole source localization in head models with all compartments isotropic as well as in a head model with white matter anisotropy. In a head model with skull anisotropy the numerical method could only be validated if the 3D lattice was chosen very fine (grid size ≤ 2 mm).


Subject(s)
Anisotropy; Electroencephalography/instrumentation; Electroencephalography/methods; Algorithms; Brain/pathology; Brain Mapping/methods; Humans; Models, Statistical; Models, Theoretical; Phantoms, Imaging; Skull/pathology; Software
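
The two error measures used above reduce to a Euclidean distance and an angle between dipole moments; a small helper with made-up example dipoles:

```python
import numpy as np

def dipole_errors(pos_a_mm, mom_a, pos_b_mm, mom_b):
    """Location error (mm) and orientation error (degrees) between two dipole fits."""
    loc_err = np.linalg.norm(np.asarray(pos_a_mm, float) - np.asarray(pos_b_mm, float))
    cos_ang = np.dot(mom_a, mom_b) / (np.linalg.norm(mom_a) * np.linalg.norm(mom_b))
    return loc_err, np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

print(dipole_errors([40.0, 10.0, 55.0], [0.0, 0.0, 1.0],
                    [47.0, 12.0, 51.0], [0.2, 0.0, 0.98]))
```
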
19.
IEEE Trans Med Imaging ; 22(3): 323-31, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12760550

ABSTRACT

In this paper, we propose a robust wavelet domain method for noise filtering in medical images. The proposed method adapts itself to various types of image noise as well as to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The algorithm exploits generally valid knowledge about the correlation of significant image features across the resolution scales to perform a preliminary coefficient classification. This preliminary coefficient classification is used to empirically estimate the statistical distributions of the coefficients that represent useful image features on the one hand and mainly noise on the other. The adaptation to the spatial context in the image is achieved by using a wavelet domain indicator of the local spatial activity. The proposed method is of low complexity, both in its implementation and execution time. The results demonstrate its usefulness for noise suppression in medical ultrasound and magnetic resonance imaging. In these applications, the proposed method clearly outperforms single-resolution spatially adaptive algorithms, in terms of quantitative performance measures as well as in terms of visual quality of the images.


Subject(s)
Algorithms; Image Enhancement/methods; Signal Processing, Computer-Assisted; Stochastic Processes; Brain/anatomy & histology; Computer Simulation; Heart/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Models, Biological; Models, Statistical; Radionuclide Imaging; Ultrasonography/methods; User-Computer Interface
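
A self-contained, single-level Haar stand-in for the wavelet-domain filtering discussed above: detail coefficients are soft-thresholded at a multiple of a robust noise estimate. It omits the inter-scale coefficient classification and spatial-context adaptation that the paper's method relies on.

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar transform: approximation plus three detail bands."""
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2); d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2); lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2); hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (even-sized images only)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def denoise(image, k=3.0):
    """Soft-threshold the detail bands at k times a robust noise estimate (MAD of HH)."""
    ll, lh, hl, hh = haar2d(image)
    sigma = np.median(np.abs(hh)) / 0.6745
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - k * sigma, 0.0)
    return ihaar2d(ll, soft(lh), soft(hl), soft(hh))

rng = np.random.default_rng(5)
clean = np.zeros((128, 128)); clean[40:90, 40:90] = 10.0
noisy = clean + rng.normal(0.0, 2.0, clean.shape)
out = denoise(noisy, k=3.0)
print("RMSE before/after:", np.sqrt(((noisy - clean) ** 2).mean()),
      np.sqrt(((out - clean) ** 2).mean()))
```
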
20.
Phys Med Biol ; 49(11): 2337-50, 2004 Jun 07.
Article in English | MEDLINE | ID: mdl-15248581

ABSTRACT

In this paper, we will describe a theoretical model of the spatial uncertainty for a line of response, due to the imperfect localization of events on the detector heads of a positron emission tomography (PET) camera. The forward acquisition problem is modelled by a Gaussian distribution of the position of interaction on a detector head, centred at the measured position. The a posteriori probability that an event originates from a certain point in the field of view (FOV) is calculated by integrating all the possible lines of response (LORs) through this point, weighted with the Gaussian detection likelihood at the LOR's end points. We have calculated these a posteriori probabilities both for perpendicular and oblique coincidences. For the oblique coincidence case it was necessary to incorporate the effect of the crystal thickness in the calculations. We found in the perpendicular incidence case as well as in the oblique incidence case that the probability density function cannot be analytically expressed in a closed form, and it was thus calculated by means of numerical integration. A Gaussian was fit to the transversal profiles of this function for a given distance to the detectors. From these fits, we can conclude that the profiles can be accurately approximated by a Gaussian, both for perpendicular and oblique coincidences. The FWHM reaches a maximum at the detector heads, and decreases towards the centre of the FOV, as was expected. Afterwards we extended this two-dimensional model to three dimensions, thus incorporating the spatial uncertainty in both transversal directions. This theoretical model was then evaluated and a very good agreement was found with theoretical calculations and with geometric Monte Carlo simulations. Possible improvements for the above-described incorporation of crystal thickness are discussed. Therefore a detailed Monte Carlo study has been performed in order to investigate the interaction probability of photons of different energies along their path in several detector materials dedicated to PET. Finally two approaches for the incorporation of this theoretical model in reconstruction algorithms are outlined.


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Models, Biological; Models, Statistical; Positron-Emission Tomography/methods; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity; Signal Processing, Computer-Assisted
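
A numerical version of the integration described above: for a point at a given depth between two planar heads, all lines through the point are integrated with a Gaussian likelihood at both measured endpoints. The head separation, positional sigma and grids are illustrative assumptions, and the crystal-thickness effect is ignored.

```python
import numpy as np

def lor_weight(xp, zp, D=600.0, sigma=2.0):
    """A posteriori weight of a point at transverse position xp (mm), depth zp (mm)
    from head 1, for an event measured at x = 0 on both heads separated by D."""
    x1 = np.linspace(-6 * sigma, 6 * sigma, 2001)       # candidate endpoint on head 1
    x2 = x1 + (xp - x1) * D / zp                        # matching endpoint on head 2
    w = np.exp(-0.5 * (x1 / sigma) ** 2) * np.exp(-0.5 * (x2 / sigma) ** 2)
    return w.sum() * (x1[1] - x1[0])                    # simple numerical integration

def fwhm(xs, profile):
    above = xs[profile >= profile.max() / 2.0]
    return above[-1] - above[0]

xs = np.linspace(-8.0, 8.0, 321)
for zp in (60.0, 300.0):                                # near a head vs. centre of the FOV
    prof = np.array([lor_weight(x, zp) for x in xs])
    print(f"depth {zp:5.1f} mm -> transverse FWHM {fwhm(xs, prof):.2f} mm")
```
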