1.
Eur J Nucl Med Mol Imaging; 36(12): 1994-2001, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19526237

ABSTRACT

PURPOSE: The aim of this study is to optimize different parameters in the time-of-flight (TOF) reconstruction for the Philips GEMINI TF. The use of TOF in iterative reconstruction introduces additional variables to be optimized compared to conventional PET reconstruction. The different parameters studied are the TOF kernel width, the kernel truncation (used to reduce reconstruction time) and the scatter correction method. METHODS: These parameters are optimized using measured phantom studies. All phantom studies were acquired with a very high number of counts to limit the effects of noise. A high number of iterations (33 subsets and 3 iterations) was used to reach convergence. The figures of merit are the uniformity in the background, the cold spot recovery and the hot spot contrast. As reference results we used the non-TOF reconstruction of the same data sets. RESULTS: It is shown that contrast recovery loss can only be avoided if the kernel is extended to more than 3 standard deviations. To obtain uniform reconstructions the recommended scatter correction is TOF single scatter simulation (SSS). This also leads to improved cold spot recovery and hot spot contrast. While the daily measurements of the system show a timing resolution in the range of 590-600 ps, the optimal reconstructions are obtained with a TOF kernel full-width at half-maximum (FWHM) of 650-700 ps. The choice of kernel width seems less critical for the recovered contrast but has an important effect on the background uniformity. Using smaller or wider kernels results in a less uniform background and reduced hot and cold contrast recovery. CONCLUSION: The different parameters studied have a large effect on the quantitative accuracy of the reconstructed images. The optimal settings from this study can be used as a guideline to make an objective comparison of the gains obtained with TOF PET reconstruction versus conventional (non-TOF) PET reconstruction.
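
For orientation, the sketch below builds the kind of truncated Gaussian TOF kernel whose width and cut-off are being tuned in this study; the function name, bin size and example values are illustrative choices, not the GEMINI TF implementation.

```python
import numpy as np

C_MM_PER_PS = 0.2998  # speed of light in mm per picosecond

def tof_kernel(fwhm_ps, truncate_sigma=3.0, bin_mm=4.0):
    """Truncated Gaussian TOF kernel along the line of response.

    fwhm_ps        : timing resolution modelled by the kernel (e.g. 650-700 ps)
    truncate_sigma : cut-off in standard deviations (the study recommends > 3)
    bin_mm         : spatial sampling along the LOR
    """
    # a timing difference dt places the annihilation at c*dt/2 along the LOR
    fwhm_mm = C_MM_PER_PS * fwhm_ps / 2.0
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half_width = truncate_sigma * sigma_mm
    x = np.arange(-half_width, half_width + bin_mm, bin_mm)
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return x, kernel / kernel.sum()

x, k = tof_kernel(650.0)
print(f"kernel support {x[0]:.0f} to {x[-1]:.0f} mm, {k.size} samples")
```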


Subjects
Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Humans; Image Processing, Computer-Assisted/instrumentation; Positron-Emission Tomography/instrumentation; Scattering, Radiation; Time Factors
2.
Med Phys; 36(4): 1053-60, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19472610

ABSTRACT

The GEANT4 application for tomographic emission (GATE) is one of the most detailed Monte Carlo simulation tools for SPECT and PET. It allows for realistic phantoms, complex decay schemes, and a large variety of detector geometries. However, only a fraction of the information in each particle history is available for postprocessing. In order to extend the analysis capabilities of GATE, a flexible framework was developed. This framework allows all detected events to be subdivided according to their type: in PET, true coincidences versus all other coincidences, and in SPECT, geometrically collimated photons versus all other photons. The framework can be applied to any isotope, phantom, and detector geometry available in GATE. It is designed to enhance the usability of GATE for the study of contamination and for the investigation of the properties of current and future prototype detectors. The authors apply the framework to a case study of Bexxar, first assuming labeling with 124I, then with 131I. It is shown that with 124I PET, results with an optimized window improve upon those with the standard window but achieve less than half of the ideal improvement. Nevertheless, 124I PET shows improved resolution compared to 131I SPECT with triple-energy-window scatter correction.
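
A minimal sketch of the kind of event subdivision this framework enables, assuming the per-photon truth information (decay ID, phantom-scatter flag) has already been extracted from the particle histories; the data structure and field names below are hypothetical and do not reflect GATE's actual output format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Coincidence:
    decay_id_1: int     # ID of the decay that produced photon 1
    decay_id_2: int     # ID of the decay that produced photon 2
    scattered_1: bool   # photon 1 scattered in the phantom?
    scattered_2: bool   # photon 2 scattered in the phantom?

def split_by_type(events: List[Coincidence]) -> Dict[str, List[Coincidence]]:
    """Divide PET coincidences into 'true' and 'other' (scatter + random)."""
    groups: Dict[str, List[Coincidence]] = {"true": [], "other": []}
    for e in events:
        same_decay = e.decay_id_1 == e.decay_id_2
        unscattered = not (e.scattered_1 or e.scattered_2)
        groups["true" if same_decay and unscattered else "other"].append(e)
    return groups
```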


Subjects
Positron-Emission Tomography/methods; Tomography, Emission-Computed, Single-Photon/methods; Computer Simulation; Humans; Iodine Radioisotopes/chemistry; Kidney/diagnostic imaging; Monte Carlo Method; Phantoms, Imaging; Photons; Physics/methods; Positron-Emission Tomography/instrumentation; Radioisotopes/chemistry; Radiometry/methods; Scattering, Radiation; Software; Thorax/metabolism; Tomography, Emission-Computed, Single-Photon/instrumentation
3.
Phys Med Biol; 54(3): 715-29, 2009 Feb 07.
Article in English | MEDLINE | ID: mdl-19131666

ABSTRACT

As an alternative to the use of traditional parallel hole collimators, SPECT imaging can be performed using rotating slat collimators. While the spatial resolution is maintained, a gain in image quality can be expected from the higher photon collection efficiency of this type of collimator. However, the use of iterative methods for fully three-dimensional (3D) reconstruction is computationally much more expensive and converges more slowly than a classical SPECT reconstruction. It has been proposed to do 3D reconstruction by splitting the system matrix into two separate matrices, forcing the reconstruction to first estimate the sinograms from the rotating slat SPECT data before estimating the image. This split-matrix approach alleviates the computational load by one order of magnitude and results in fast computation of the projections in an iterative algorithm, but it does not solve the problem of slow convergence. There is thus a need for an algorithm which speeds up convergence while maintaining image quality for rotating slat collimated SPECT cameras. Therefore, we developed a reconstruction algorithm based on the split-matrix approach which allows both fast calculation of the forward and backward projections and fast convergence. In this work, an algorithm of the maximum likelihood expectation maximization (MLEM) type, obtained from a split system matrix MLEM reconstruction, is proposed as a reconstruction method for rotating slat collimated SPECT data. Here, we compare this new algorithm to the conventional split system matrix MLEM method and to a gold standard fully 3D MLEM reconstruction algorithm on the basis of computational load, convergence and contrast-to-noise. Furthermore, ordered subsets expectation maximization (OSEM) implementations of these three algorithms are compared. Calculation of computational load and convergence for the different algorithms shows speedup factors for the new method of 38 and 426 compared to the split-matrix MLEM approach and the fully 3D MLEM, respectively, and of 16 and 21 compared to the split-matrix OSEM and the fully 3D OSEM, respectively. A contrast-to-noise study based on simulated data shows that our new approach has accuracy comparable to that of the fully 3D reconstruction method. The algorithm developed in this study allows iterative image reconstruction of rotating slat collimated SPECT data with image quality equal to that of a classical SPECT reconstruction, in a comparable amount of computation time.
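
To make the split-matrix idea concrete, here is a minimal MLEM update with a factored system matrix A = B @ C (image -> strip sinogram -> plane-integral data), written with dense NumPy arrays; it shows only the shared split forward/backprojection, not the accelerated algorithm proposed in the paper.

```python
import numpy as np

def mlem_split(y, B, C, n_iter=50, eps=1e-12):
    """MLEM with a factored system matrix A = B @ C.

    y : measured plane-integral (rotating slat) data, shape (n_data,)
    C : image -> parallel-strip sinogram operator, shape (n_sino, n_voxels)
    B : sinogram -> plane-integral operator, shape (n_data, n_sino)
    """
    x = np.ones(C.shape[1])
    sens = C.T @ (B.T @ np.ones(B.shape[0]))        # A^T 1
    for _ in range(n_iter):
        proj = B @ (C @ x)                          # forward projection A x
        ratio = y / np.maximum(proj, eps)
        x *= (C.T @ (B.T @ ratio)) / np.maximum(sens, eps)   # multiplicative update
    return x
```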


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Tomography, Emission-Computed, Single-Photon/instrumentation; Tomography, Emission-Computed, Single-Photon/methods; Equipment Design; Equipment Failure Analysis; Humans; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity
4.
Phys Med Biol; 54(6): 1673-89, 2009 Mar 21.
Article in English | MEDLINE | ID: mdl-19242052

ABSTRACT

The simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) can give new insights into how the brain functions. However, the strong electromagnetic field of the MR scanner generates artifacts that obscure the EEG and diminish its readability. Among them, the ballistocardiographic artifact (BCGa) that appears on the EEG is believed to be related to blood flow in scalp arteries, leading to electrode movements. Average artifact subtraction (AAS) techniques, used to remove the BCGa, assume a deterministic nature of the artifact. This assumption may be too strong, considering the blood-flow-related nature of the phenomenon. In this work we propose a new method, based on canonical correlation analysis (CCA) and blind source separation (BSS) techniques, to reduce the BCGa in simultaneously recorded EEG-fMRI data. We optimized the method to reduce the user's interaction to a minimum. When tested on six subjects recorded at 1.5 T or 3 T, the average artifacts extracted with BSS-CCA and AAS did not show significant differences, showing the absence of systematic errors. On the other hand, when compared on the basis of intra-subject variability, we found significant differences and better performance of the proposed method with respect to AAS. We demonstrated that our method deals with the intrinsic, subject-specific variability of the artifact that may cause averaging techniques to fail.
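
A compact sketch of the CCA-based separation step, under the usual BSS-CCA assumption that canonical sources of the data and its one-sample delay are ordered by autocorrelation and that the most autocorrelated ones capture the structured BCG artifact; the epoching and the number of components to remove are user choices not taken from the paper.

```python
import numpy as np

def bss_cca_clean(X, n_remove=2):
    """Remove the most autocorrelated CCA sources from one EEG epoch.

    X : (channels, samples) array, e.g. one cardiac cycle of EEG.
    """
    X = X - X.mean(axis=1, keepdims=True)
    A, B = X[:, 1:], X[:, :-1]                     # data and one-sample delay
    Caa, Cbb, Cab = A @ A.T, B @ B.T, A @ B.T
    # eigenvectors of Caa^-1 Cab Cbb^-1 Cba give the canonical weights of A
    M = np.linalg.solve(Caa, Cab) @ np.linalg.solve(Cbb, Cab.T)
    evals, W = np.linalg.eig(M)
    W = W[:, np.argsort(evals.real)[::-1]].real    # sort by autocorrelation
    S = W.T @ X                                    # estimated sources
    S[:n_remove, :] = 0.0                          # drop artifact-related sources
    return np.linalg.pinv(W.T) @ S                 # back to channel space
```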


Subjects
Artifacts; Ballistocardiography; Electroencephalography/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Humans
5.
J Magn Reson; 190(2): 189-99, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18023218

ABSTRACT

Diffusion weighted magnetic resonance imaging enables the visualization of fibrous tissues such as brain white matter. The validation of this non-invasive technique requires phantoms with a well-known structure and diffusion behavior. This paper presents anisotropic diffusion phantoms consisting of parallel fibers. The diffusion properties of the fiber phantoms are measured using diffusion weighted magnetic resonance imaging and bulk NMR measurements. To enable quantitative evaluation of the measurements, the diffusion in the interstitial space between fibers is modeled using Monte Carlo simulations of random walkers. The time-dependent apparent diffusion coefficient and kurtosis, quantifying the deviation from a Gaussian diffusion profile, are simulated in 3D geometries of parallel fibers with varying packing geometries and packing densities. The simulated diffusion coefficients are compared to the theory of diffusion in porous media, showing a good agreement. Based on the correspondence between simulations and experimental measurements, the fiber phantoms are shown to be useful for the quantitative validation of diffusion imaging on clinical MRI-scanners.
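
As a pointer to what such a random-walk simulation looks like, the sketch below runs walkers in the 2D cross-section of a square lattice of parallel fibres and returns the perpendicular apparent diffusion coefficient and kurtosis; rejecting steps that end inside a fibre is a crude stand-in for reflecting boundaries, and the geometry and diffusion parameters are illustrative, not the phantom values of the paper.

```python
import numpy as np

def adc_kurtosis(packing=0.5, radius=10.0, D0=2.0, dt=0.05,
                 n_steps=4000, n_walkers=2000, seed=0):
    """Random walk in the water between parallel fibres (units: um, ms)."""
    rng = np.random.default_rng(seed)
    pitch = radius * np.sqrt(np.pi / packing)      # square lattice spacing

    def inside_fibre(p):
        d = np.abs(((p + pitch / 2) % pitch) - pitch / 2)
        return np.hypot(d[:, 0], d[:, 1]) < radius

    pos = rng.uniform(0.0, pitch, size=(n_walkers, 2))
    while inside_fibre(pos).any():                 # start in the water phase only
        bad = inside_fibre(pos)
        pos[bad] = rng.uniform(0.0, pitch, size=(bad.sum(), 2))
    start = pos.copy()

    sigma = np.sqrt(2.0 * D0 * dt)                 # per-axis rms step
    for _ in range(n_steps):
        trial = pos + rng.normal(0.0, sigma, size=pos.shape)
        pos = np.where(inside_fibre(trial)[:, None], pos, trial)

    t = n_steps * dt
    dx = pos[:, 0] - start[:, 0]
    adc = np.mean(dx ** 2) / (2.0 * t)             # um^2/ms, perpendicular
    kurtosis = np.mean(dx ** 4) / np.mean(dx ** 2) ** 2 - 3.0
    return adc, kurtosis

print(adc_kurtosis())
```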


Subjects
Brain Mapping/methods; Diffusion Magnetic Resonance Imaging/methods; Phantoms, Imaging; Algorithms; Anisotropy; Body Water/metabolism; Computer Simulation; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Monte Carlo Method; Nerve Fibers
6.
Clin Neurophysiol; 119(8): 1756-1770, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18499517

ABSTRACT

OBJECTIVE: Methods for the detection of epileptiform events can be broadly divided into two main categories: temporal detection methods that exploit the EEG's temporal characteristics, and spatial detection methods that base detection on the results of an implicit or explicit source analysis. We describe how the framework of a spatial detection method was extended to improve its performance by including temporal information. This results in a method that provides (i) automated localization of an epileptogenic focus and (ii) detection of focal epileptiform events in an EEG recording. For the detection, only one threshold value needs to be set. METHODS: The method comprises five consecutive steps: (1) dipole source analysis in a moving window, (2) automatic selection of focal brain activity, (3) dipole clustering to arrive at the identification of the epileptiform cluster, (4) derivation of a spatio-temporal template of the epileptiform activity, and (5) template matching. Routine EEG recordings from eight paediatric patients with focal epilepsy were labelled independently by two experts. The method was evaluated in terms of (i) ability to identify the epileptic focus, (ii) validity of the derived template, and (iii) detection performance. The clustering performance was evaluated using leave-one-out cross-validation. Detection performance was evaluated using precision-recall curves and compared to the performance of two temporal (mimetic and wavelet-based) and one spatial (dipole-analysis-based) detection methods. RESULTS: The method succeeded in identifying the epileptogenic focus in seven of the eight recordings. For these recordings, the mean distance between the epileptic focus estimated by the method and the region indicated by the labelling of the experts was 8 mm. Except for two EEG recordings where the dipole clustering step failed, the derived template corresponded to the epileptiform activity marked by the experts. Over the eight EEGs, the method showed a mean sensitivity and selectivity of 92% and 77%, respectively. CONCLUSIONS: The method allows automated localization of the epileptogenic focus and shows good agreement with the region indicated by the labelling of the experts. If the dipole clustering step is successful, the method allows detection of the focal epileptiform events, with a detection performance comparable to or better than that of the other methods. SIGNIFICANCE: The identification and quantification of epileptiform events is of considerable importance in the diagnosis of epilepsy. Our method allows the automatic identification of the epileptic focus, which is of value in epilepsy surgery. The method can also be used as an offline exploration tool for focal EEG activity, displaying the dipole clusters and corresponding time series.
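
The final template-matching step (5) with its single threshold can be written very compactly; a minimal version is sketched below, with the spatio-temporal template assumed to come out of steps (1)-(4).

```python
import numpy as np

def template_detections(eeg, template, threshold=0.8):
    """Flag windows whose normalised correlation with the template exceeds
    the (single) detection threshold.

    eeg      : (channels, samples) array
    template : (channels, width) spatio-temporal template
    returns  : list of sample indices where a detection starts
    """
    n_ch, width = template.shape
    t = template - template.mean()
    t /= np.linalg.norm(t) + 1e-12
    hits = []
    for start in range(eeg.shape[1] - width + 1):
        w = eeg[:, start:start + width]
        w = w - w.mean()
        corr = np.sum(w * t) / (np.linalg.norm(w) + 1e-12)
        if corr >= threshold:
            hits.append(start)
    return hits
```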


Subjects
Brain Mapping; Brain/physiopathology; Electroencephalography; Epilepsies, Partial/physiopathology; Algorithms; Child; Child, Preschool; Cluster Analysis; Electrodes; Epilepsies, Partial/pathology; Female; Humans; Male; Reproducibility of Results; Signal Processing, Computer-Assisted; Time Factors
7.
Med Phys; 35(4): 1476-85, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18491542

ABSTRACT

GATE (Geant4 Application for Tomographic Emission) is a toolkit built on Geant4 (geometry and tracking 4) for accurate simulation of positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners. GATE simulations with realistic count levels are very CPU-intensive, taking up to several days on single-CPU computers. Therefore, we implemented both standard forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling, which allows the simulation of all projections simultaneously in GATE. In addition, an FD- and CFD-specialized Geant4 navigator was developed to overcome the detailed but slow tracking algorithms in Geant4. This article focuses on the implementation and validation of these developments. The results show good agreement between the FD and CFD simulations and analog GATE simulations for Tc-99m SPECT. These combined developments accelerate GATE by three orders of magnitude in the case of FD, and CFD is an additional two orders of magnitude faster than FD. This renders realistic simulations feasible within minutes on a single CPU. Future work will extend our framework to higher energy isotopes, which will require the inclusion of a septal penetration and collimator scatter model.
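
The core idea behind a convolution-based forced detection projector can be sketched independently of GATE: each image plane at a given depth is blurred with a depth-dependent response and summed towards the detector. The PSF model and parameter values below are placeholders, and attenuation is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cfd_like_projection(activity, pixel_mm=4.0, sigma0_mm=2.0, slope=0.02):
    """One SPECT projection of a 2D activity slice with depth-dependent blur.

    activity : (n_depth, n_x) array, row 0 closest to the detector.
    """
    proj = np.zeros(activity.shape[1])
    for z, plane in enumerate(activity):
        sigma_mm = sigma0_mm + slope * z * pixel_mm   # resolution loss with depth
        proj += gaussian_filter1d(plane, sigma_mm / pixel_mm)
    return proj
```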


Subjects
Algorithms; Image Interpretation, Computer-Assisted/methods; Software; Tomography, Emission-Computed, Single-Photon/methods; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity; Time Factors
8.
Phys Med Biol; 53(7): 1989-2002, 2008 Apr 07.
Article in English | MEDLINE | ID: mdl-18356576

ABSTRACT

The main remaining challenge for a gamma camera is to overcome the existing trade-off between collimator spatial resolution and system sensitivity. This problem, which strongly limits the performance of parallel hole collimated gamma cameras, can be overcome by applying new collimator designs such as rotating slat (RS) collimators, which have a much higher photon collection efficiency. The drawback of an RS collimated gamma camera is that image reconstruction is needed even to obtain planar images, resulting in noise accumulation. However, nowadays iterative reconstruction techniques with accurate system modeling can provide better image quality. Because the impact of this modeling on image quality differs from one system to another, an objective assessment of the image quality obtained with an RS collimator is needed in comparison with classical projection images obtained using a parallel hole (PH) collimator. In this paper, a comparative study of image quality, achieved with system modeling, is presented. RS data are reconstructed to planar images using maximum likelihood expectation maximization (MLEM) with an accurate Monte Carlo derived system matrix, while PH projections are deconvolved using a Monte Carlo derived point-spread function. Contrast-to-noise characteristics are used to show image quality for cold and hot spots of varying size. The influence of the object size and contrast is investigated using the optimal contrast-to-noise ratio (CNR(o)). For a typical phantom setup, results show that cold spot imaging is slightly better for a PH collimator. For hot spot imaging, the CNR(o) of the RS images is found to increase with increasing lesion diameter and lesion contrast, while it decreases when background dimensions become larger. Only for very large background dimensions in combination with low-contrast lesions could the use of a PH collimator be beneficial for hot spot imaging. In all other cases, the RS collimator scores better. Finally, the simulation of a planar bone scan on an RS collimator revealed a hot spot contrast improvement of up to 54% compared to a classical PH bone scan.
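
For reference, the contrast-to-noise ratio underlying these comparisons can be computed per lesion as below; the CNR(o) reported in the paper is the optimum of such a quantity over reconstruction settings, which this helper does not search for.

```python
import numpy as np

def cnr(image, lesion_mask, background_mask):
    """Contrast-to-noise ratio of a hot or cold spot against the background."""
    lesion = image[lesion_mask].mean()
    background = image[background_mask].mean()
    return np.abs(lesion - background) / image[background_mask].std()
```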


Subjects
Gamma Cameras; Image Interpretation, Computer-Assisted/methods; Algorithms; Computers; Humans; Image Processing, Computer-Assisted; Likelihood Functions; Models, Statistical; Models, Theoretical; Monte Carlo Method; Neoplasm Metastasis; Neoplasms/pathology; Phantoms, Imaging; Software; Tomography, X-Ray Computed/methods
9.
Phys Med Biol; 53(7): 1877-94, 2008 Apr 07.
Article in English | MEDLINE | ID: mdl-18364544

ABSTRACT

The conductivities used in the head model play a very important role in improving EEG source localization in the brain. In this study, we focus on the modeling of the anisotropic conductivity of the white matter. The anisotropic conductivity profile can be derived from diffusion weighted magnetic resonance images (DW-MRI). However, deriving these anisotropic conductivities from diffusion weighted MR images of the white matter is not straightforward. In the literature, two methods can be found for calculating the conductivity from the diffusion weighted images. One method uses a fixed value for the ratio of the conductivity in different directions, while the other method uses a conductivity profile obtained from a linear scaling of the diffusion ellipsoid. We propose a model which can be used to derive the conductivity profile from the diffusion tensor images. This model is based on a variable anisotropy ratio throughout the white matter and combines the linear relationship stated in the literature with a constraint on the magnitude of the conductivity tensor (also known as the volume constraint). This approach is referred to in the paper as approach A. In our study, we investigate the dipole estimation differences that arise when a more simplified model for white matter anisotropy (approach B) is used, while the electrode potentials are derived using a head model with a more realistic description of the white matter anisotropy (approach A). We used a realistic head model, in which the forward problem was solved using a finite difference method that can incorporate anisotropic conductivities. As error measures we considered the dipole location error and the dipole orientation error. The results show that the dipole location errors are all below 10 mm, with an average of 4 mm in gray matter regions. The dipole orientation errors ranged up to 66.4 degrees, with a mean of 11.6 degrees in gray matter regions. Qualitatively, the results show that the orientation and location errors depend on the orientation of the test dipole. The location error is larger when the orientation of the test dipole is similar to the orientation of the anisotropy, while the orientation error is larger when the orientation of the test dipole deviates from the orientation of the anisotropy. From these results, we can conclude that the modeling of white matter anisotropy plays an important role in EEG source localization. More specifically, accurate source localization will require an accurate modeling of the white matter conductivity profile in each voxel.
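
A sketch of one way to combine the linear eigenvalue scaling with the volume constraint (the geometric mean of the conductivity eigenvalues is fixed to an isotropic reference value); the 0.33 S/m reference and the exact normalisation are assumptions, and the paper's approach A may differ in detail.

```python
import numpy as np

def conductivity_from_dti(D, sigma_iso=0.33):
    """Map a 3x3 diffusion tensor to an anisotropic conductivity tensor (S/m).

    Eigenvectors are shared with D; eigenvalues are scaled linearly and then
    normalised so that their geometric mean equals sigma_iso (volume constraint).
    """
    evals, evecs = np.linalg.eigh(D)
    evals = np.clip(evals, 1e-12, None)
    scale = sigma_iso / np.prod(evals) ** (1.0 / 3.0)
    return evecs @ np.diag(scale * evals) @ evecs.T
```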


Subjects
Electroencephalography/methods; Algorithms; Animals; Anisotropy; Brain/pathology; Brain Mapping/methods; Computer Simulation; Diffusion; Electrodes; Electroencephalography/instrumentation; Equipment Design; Humans; Models, Biological; Neurons/metabolism; Reproducibility of Results; Signal Processing, Computer-Assisted
10.
Phys Med Biol; 53(19): 5405-19, 2008 Oct 07.
Article in English | MEDLINE | ID: mdl-18765890

ABSTRACT

Diffusion weighted magnetic resonance imaging offers a non-invasive tool to explore the three-dimensional structure of brain white matter in clinical practice. Anisotropic diffusion hardware phantoms are useful for the quantitative validation of this technique. This study provides guidelines on how to manufacture anisotropic fibre phantoms in a reproducible way and on which fibre material to choose to obtain good quality diffusion weighted images. Several fibre materials are compared regarding their effect on the diffusion MR measurements of the water molecules inside the phantoms. The material properties that influence the diffusion anisotropy are the fibre density and diameter, while the fibre surface relaxivity and magnetic susceptibility determine the signal-to-noise ratio. The effect on the T2 relaxation time of water in the phantoms has been modelled, and the diffusion behaviour inside the fibre phantoms has been quantitatively evaluated using Monte Carlo random walk simulations.
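
One common way to model the surface contribution to the water T2, used here purely as an illustration (the abstract does not state which model the authors fitted), is the fast-exchange relation 1/T2 = 1/T2_bulk + rho * S/V with the surface-to-water-volume ratio of a bundle of parallel fibres.

```python
def t2_in_fibre_phantom(radius_um, packing, rho_um_per_ms, t2_bulk_ms=2000.0):
    """Estimated water T2 (ms) in a phantom of parallel fibres.

    radius_um      : fibre radius
    packing        : fibre volume fraction
    rho_um_per_ms  : surface relaxivity of the fibre material (assumed known)
    S/V of the water phase for parallel fibres: 2*packing / (radius*(1-packing)).
    """
    s_over_v = 2.0 * packing / (radius_um * (1.0 - packing))
    return 1.0 / (1.0 / t2_bulk_ms + rho_um_per_ms * s_over_v)
```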


Subjects
Diffusion Magnetic Resonance Imaging/methods; Diffusion; Phantoms, Imaging; Anisotropy; Diffusion Magnetic Resonance Imaging/instrumentation; Magnetics; Reproducibility of Results; Surface Properties; Time Factors; Water/chemistry
11.
Med Phys; 34(5): 1766-78, 2007 May.
Article in English | MEDLINE | ID: mdl-17555258

ABSTRACT

The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum a posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as the order, the number of basis functions and the knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher-order basis reduced both the bias and the variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs; however, the TACs became noisier. We compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placement strategy was also developed and evaluated. It is based on an arc-length redistribution scheme during the reconstruction. The free-knot reconstruction was more accurate while reducing the noise level, especially for fast-changing TACs such as blood input functions. Limiting the number of temporal basis functions, combined with the adaptive knot placement strategy, is in this case advantageous for regularization purposes compared with the other regularization techniques.
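
To make the temporal basis concrete, the sketch below evaluates a clamped cubic B-spline basis on a time grid and fits a noisy TAC by ordinary least squares; the paper instead estimates the basis coefficients inside ML/MAP list-mode reconstruction and also adapts the knot positions, neither of which is shown here.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(times, n_basis=8, degree=3):
    """(len(times), n_basis) design matrix of a clamped B-spline basis."""
    n_knots = n_basis + degree + 1
    inner = np.linspace(times[0], times[-1], n_knots - 2 * degree)
    knots = np.r_[[times[0]] * degree, inner, [times[-1]] * degree]
    return np.column_stack(
        [BSpline(knots, np.eye(n_basis)[i], degree)(times) for i in range(n_basis)]
    )

# fit a noisy time-activity curve (illustrative shape and noise level)
t = np.linspace(0.0, 60.0, 120)                          # minutes
tac = (1 - np.exp(-t / 3.0)) * np.exp(-t / 40.0) + 0.01 * np.random.randn(t.size)
B = bspline_basis(t)
coeffs, *_ = np.linalg.lstsq(B, tac, rcond=None)
fitted = B @ coeffs
```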


Subjects
Image Interpretation, Computer-Assisted; Pattern Recognition, Automated; Positron-Emission Tomography/methods; Radiographic Image Enhancement/methods; Likelihood Functions; Phantoms, Imaging; Subtraction Technique
12.
Med Phys; 34(6): 1926-33, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17654895

ABSTRACT

Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values.
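
The time-domain splitting that underlies this kind of cluster parallelisation can be sketched as below; the {START}/{STOP} placeholders and file layout are hypothetical, not the macro syntax or submit-file format generated by the authors' software.

```python
from pathlib import Path

def split_simulation(total_time_s, n_jobs, template_macro, out_dir="jobs"):
    """Write one macro per job, each covering a disjoint time slice.

    template_macro is assumed to contain '{START}' and '{STOP}' placeholders;
    the per-job outputs would be merged after all jobs have finished.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    slice_s = total_time_s / n_jobs
    for i in range(n_jobs):
        macro = template_macro.format(START=i * slice_s, STOP=(i + 1) * slice_s)
        (out / f"job_{i:03d}.mac").write_text(macro)
```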


Subjects
Computer Communication Networks; Computing Methodologies; Image Interpretation, Computer-Assisted/methods; Models, Biological; Signal Processing, Computer-Assisted; Software; Tomography, Emission-Computed/methods; Algorithms; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity
13.
Phys Med Biol; 52(23): 6781-94, 2007 Dec 07.
Article in English | MEDLINE | ID: mdl-18029975

ABSTRACT

Carnosine has been shown to be present in the skeletal muscle and in the brain of a variety of animals and humans. Despite the various physiological functions assigned to this metabolite, its exact role remains unclear. It has been suggested that carnosine plays a role in buffering in the intracellular physiological pH range in skeletal muscle by accepting hydrogen ions released during the development of fatigue in intensive exercise. It is thus postulated that the concentration of carnosine is an indicator of the extent of the buffering capacity. However, the determination of the concentration of this metabolite has so far only been performed by means of muscle biopsy, which is an invasive procedure. In this paper, we used proton magnetic resonance spectroscopy (1H MRS) to perform absolute quantification of carnosine in vivo non-invasively. The method was verified by phantom experiments and in vivo measurements in the calf muscles of athletes and untrained volunteers. The measured mean concentrations in the soleus and gastrocnemius muscles were 2.81 +/- 0.57 mM and 4.8 +/- 1.59 mM (mean +/- SD) for athletes, and 2.58 +/- 0.65 mM and 3.3 +/- 0.32 mM for untrained volunteers, respectively. These values are in agreement with previously reported biopsy-based results. Our results suggest that 1H MRS can provide an alternative method for non-invasively determining the carnosine concentration in human calf muscle in vivo.


Subjects
Algorithms; Carnosine/analysis; Magnetic Resonance Spectroscopy/methods; Muscle, Skeletal/metabolism; Humans; Protons; Thigh; Tissue Distribution
14.
Cancer Biother Radiopharm; 22(3): 423-30, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17651050

ABSTRACT

I-131 is a frequently used isotope for radionuclide therapy. This technique for cancer treatment requires a pre-therapeutic dosimetric study, which is usually performed (for this radionuclide) by directly imaging the uptake of the therapeutic radionuclide in the body or by replacing it with one of its isotopes that are more suitable for imaging. This study aimed to compare the image quality that can be achieved with three iodine isotopes: I-131 and I-123 for single-photon emission computed tomography imaging, and I-124 for positron emission tomography imaging. The imaging characteristics of each isotope were investigated using simulated data. Their spectra, point-spread functions, and contrast-recovery curves were drawn and compared. I-131 was imaged with a high-energy all-purpose (HEAP) collimator, whereas two collimators were compared for I-123: low-energy high-resolution (LEHR) and medium-energy (ME). No mechanical collimation was used for I-124. The influence of small high-energy peaks (>0.1%) on the contamination of the main energy window was evaluated. Furthermore, the effect of a scattering medium was investigated, and the triple energy window (TEW) correction was used for spectral-based scatter correction. Results showed that I-123 gave the best results with the LEHR collimator when the scatter correction was applied. Without correction, the ME collimator reduced the effects of high-energy contamination. I-131 gave the worst results, which can be explained by the large amount of septal penetration from the photopeak and by the low spatial resolution of the collimator. I-124 showed the best imaging properties owing to its electronic collimation (high sensitivity) and a short coincidence time window.
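
The triple-energy-window correction mentioned here estimates the scatter inside the photopeak window from two narrow flanking windows (trapezoidal approximation) and subtracts it; a per-bin sketch, with the window widths in the example being illustrative rather than the acquisition settings of the study:

```python
def tew_corrected_counts(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter correction for one pixel or projection bin.

    c_* are counts in the photopeak and the two flanking windows,
    w_* the corresponding window widths (same energy units).
    """
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return max(c_peak - scatter, 0.0)

# example: ~20% photopeak window for I-123 (159 keV) with 6 keV sub-windows
primary = tew_corrected_counts(c_peak=1.0e4, c_lower=900.0, c_upper=400.0,
                               w_peak=32.0, w_lower=6.0, w_upper=6.0)
```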


Subjects
Iodine Radioisotopes; Computer Simulation; Humans; Image Processing, Computer-Assisted; Iodine Radioisotopes/classification; Molecular Weight; Phantoms, Imaging; Sensitivity and Specificity; Tomography/methods
15.
J Neuroeng Rehabil; 4: 46, 2007 Nov 30.
Article in English | MEDLINE | ID: mdl-18053144

ABSTRACT

BACKGROUND: The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources that are responsible for the measured potentials at the EEG electrodes. METHODS: While other reviews give an extensive summary of both the forward and the inverse problem, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. RESULTS: It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization, which is introduced to speed up the forward calculations: these are then performed for each electrode position rather than for each dipole position. Solving Poisson's equation using FEM or FDM corresponds to solving a large sparse linear system, which requires iterative methods. The following iterative methods are discussed: successive over-relaxation, the conjugate gradient method and the algebraic multigrid method. CONCLUSION: Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model and the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies are needed on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem.
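
To make the "large sparse linear system" concrete, the toy sketch below discretises Poisson's equation on a homogeneous, isotropic 2D grid with the 5-point stencil and solves it with conjugate gradients; a current dipole is modelled as a monopole pair, and the implicit zero-potential boundary and constant conductivity are simplifications far removed from a realistic head model.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def toy_forward_solve(n=64, h=1.0, sigma=0.33):
    """Solve -div(sigma grad V) = I on an n x n grid (finite differences + CG)."""
    main = 4.0 * np.ones(n * n)
    side = -np.ones(n * n - 1)
    side[np.arange(1, n * n) % n == 0] = 0.0        # no coupling across row ends
    updown = -np.ones(n * n - n)
    A = (sigma / h ** 2) * diags([main, side, side, updown, updown],
                                 [0, -1, 1, -n, n], format="csr")
    b = np.zeros(n * n)
    b[(n // 2) * n + n // 2] = 1.0                  # current source (+I)
    b[(n // 2) * n + n // 2 + 2] = -1.0             # current sink (-I)
    V, info = cg(A, b)                              # conjugate gradient solver
    return V.reshape(n, n), info
```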


Subjects
Brain Mapping; Brain/physiology; Electroencephalography; Humans; Models, Neurological
16.
Phys Med Biol; 51(2): 391-405, 2006 Jan 21.
Article in English | MEDLINE | ID: mdl-16394346

ABSTRACT

In classical SPECT with parallel hole collimation, the sensitivity is constant over the field of view (FOV). This is no longer the case if a rotating slat collimator with planar photon collection is used: there will be a significant variation of the sensitivity within the FOV. Since not compensating for this inhomogeneous sensitivity distribution would result in non-quantitative images, accurate knowledge of the sensitivity is mandatory to account for it during reconstruction. On the other hand, the spatial resolution versus distance dependence remains unaltered compared to parallel hole collimation. To derive the sensitivity, different factors have to be taken into account. A first factor concerns the intrinsic detector properties and is incorporated into the calculations as a detection efficiency term that depends on the incidence angle. The calculations are based on a second and more pronounced factor: the collimator and detector geometry. Several assumptions are made in the calculation of the sensitivity formulae, and it is shown that these calculations deliver a valid prediction of the sensitivity at points far enough from the collimator. To derive a close-field model which also accounts for points close to the collimator surface, a modified calculation method is used. After calculating the sensitivity in one plane, it is easy to obtain the tomographic sensitivity. This is done by rotating the sensitivity maps for spin and camera rotation. The results derived from the calculations are then compared to simulation results, and both show good agreement after including the aforementioned detection efficiency term. The validity of the calculations is also proven by measuring the sensitivity in the FOV of a prototype rotating slat gamma camera. An expression for the resolution of these planar collimation systems is obtained. It is shown that for equal collimator dimensions the same resolution-distance relationship is obtained as for parallel hole collimators. Nevertheless, a better spatial resolution can be obtained with our prototype camera due to the smaller pitch of the slats. This can be achieved without a major drop in system sensitivity because the slats consist of less collimator material compared to a parallel hole collimator. The accuracy of the calculated resolution is proven by comparison with Monte Carlo simulated and measured resolution values.
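
The resolution-distance relationship referred to above has the familiar collimator form, with the collimator term growing linearly with distance and combined in quadrature with the intrinsic detector resolution; the numbers below are nominal placeholders and septal penetration is ignored.

```python
import numpy as np

def system_resolution(distance_mm, aperture_mm=1.5, length_mm=35.0,
                      intrinsic_mm=3.5):
    """FWHM system resolution versus source-to-collimator distance."""
    r_coll = aperture_mm * (length_mm + distance_mm) / length_mm
    return np.hypot(r_coll, intrinsic_mm)

print(system_resolution(np.array([0.0, 50.0, 100.0, 150.0])))
```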


Subjects
Algorithms; Computer Simulation; Tomography, Emission-Computed, Single-Photon/instrumentation; Equipment Design; Gamma Cameras; Monte Carlo Method; Tomography, Emission-Computed, Single-Photon/methods
17.
Phys Med Biol; 51(12): 3105-25, 2006 Jun 21.
Article in English | MEDLINE | ID: mdl-16757866

ABSTRACT

In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. In this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the system matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from the solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block-row needs to be stored. Data were generated using a fast ray-tracing Monte Carlo simulator. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for the forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method performed better than a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. Major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing, and this correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
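
Storing only the first block-row is enough to apply the whole matrix, as the dense sketch below shows (an FFT along the block index would be the faster route and is omitted); block row i of a block-circulant matrix is the first row cyclically shifted by i.

```python
import numpy as np

def block_circulant_matvec(first_block_row, x):
    """Multiply a block-circulant matrix by a vector.

    first_block_row : (P, m, n) array, the blocks M[0, j] for j = 0..P-1
    x               : (P, n) array, the vector split into P angular blocks
    Uses M[i, j] = M[0, (j - i) mod P].
    """
    P, m, _ = first_block_row.shape
    y = np.zeros((P, m))
    for i in range(P):
        for j in range(P):
            y[i] += first_block_row[(j - i) % P] @ x[j]
    return y.reshape(-1)
```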


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Positron-Emission Tomography/methods; Signal Processing, Computer-Assisted; Computer Simulation; Humans; Models, Biological; Models, Statistical; Monte Carlo Method; Phantoms, Imaging; Positron-Emission Tomography/instrumentation; Reproducibility of Results; Sensitivity and Specificity
18.
Phys Med Biol; 50(16): 3787-806, 2005 Aug 21.
Article in English | MEDLINE | ID: mdl-16077227

ABSTRACT

Many implementations of electroencephalogram (EEG) dipole source localization neglect the anisotropic conductivities inherent to head tissues, such as the skull and white matter anisotropy. We examine the dipole localization errors in EEG source analysis that are caused by not incorporating the anisotropic properties of the conductivity of the skull and white matter. First, simulations were performed in a five-shell spherical head model using the analytical formula. Test dipoles were placed in three orthogonal planes in the spherical head model. Neglecting the skull anisotropy results in a dipole localization error of, on average, 13.73 mm with a maximum of 24.51 mm. For white matter anisotropy these values are 11.21 mm and 26.3 mm, respectively. Next, a finite difference method (FDM), presented by Saleheen and Kwong (1997 IEEE Trans. Biomed. Eng. 44 800-9), is used to incorporate the anisotropy of the skull and white matter. The FDM has been validated for EEG dipole source localization in head models with all compartments isotropic as well as in a head model with white matter anisotropy. In a head model with skull anisotropy, the numerical method could only be validated if the 3D lattice was chosen very fine (grid size ≤ 2 mm).
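
The two error measures are simple to compute once estimated and reference dipoles are available; a small helper is sketched below (whether antiparallel moments count as 180 degrees or 0 degrees is a convention left to the user).

```python
import numpy as np

def dipole_errors(pos_est, mom_est, pos_ref, mom_ref):
    """Return (location error, orientation error in degrees)."""
    loc_err = np.linalg.norm(np.asarray(pos_est) - np.asarray(pos_ref))
    cos_ang = np.dot(mom_est, mom_ref) / (
        np.linalg.norm(mom_est) * np.linalg.norm(mom_ref))
    ori_err = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return loc_err, ori_err
```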


Subjects
Anisotropy; Electroencephalography/instrumentation; Electroencephalography/methods; Algorithms; Brain/pathology; Brain Mapping/methods; Humans; Models, Statistical; Models, Theoretical; Phantoms, Imaging; Skull/pathology; Software
19.
IEEE Trans Med Imaging; 22(3): 323-31, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12760550

ABSTRACT

In this paper, we propose a robust wavelet domain method for noise filtering in medical images. The proposed method adapts itself to various types of image noise as well as to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The algorithm exploits generally valid knowledge about the correlation of significant image features across the resolution scales to perform a preliminary coefficient classification. This preliminary coefficient classification is used to empirically estimate the statistical distributions of the coefficients that represent useful image features on the one hand and mainly noise on the other. The adaptation to the spatial context in the image is achieved by using a wavelet domain indicator of the local spatial activity. The proposed method is of low complexity, both in its implementation and execution time. The results demonstrate its usefulness for noise suppression in medical ultrasound and magnetic resonance imaging. In these applications, the proposed method clearly outperforms single-resolution spatially adaptive algorithms, in terms of quantitative performance measures as well as in terms of visual quality of the images.
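
A much-simplified filter in the same spirit is sketched below using PyWavelets: a detail coefficient is kept only when it is significant at its own scale and at the parent (coarser) scale. The published method additionally estimates the coefficient distributions empirically and uses a local spatial-activity indicator, neither of which is reproduced here, and the hard k-sigma test is my own simplification.

```python
import numpy as np
import pywt

def interscale_denoise(image, wavelet="db2", levels=3, k=3.0):
    """Keep detail coefficients that are significant at two adjacent scales."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745   # noise from finest diagonal
    cleaned = [coeffs[0]]
    for lvl in range(1, levels + 1):                    # 1 = coarsest detail level
        bands = []
        for b, child in enumerate(coeffs[lvl]):
            if lvl == 1:
                parent = np.full_like(child, np.inf)    # no coarser parent: child test only
            else:
                up = np.kron(coeffs[lvl - 1][b], np.ones((2, 2)))
                parent = up[:child.shape[0], :child.shape[1]]
            keep = (np.abs(child) > k * sigma) & (np.abs(parent) > k * sigma)
            bands.append(child * keep)
        cleaned.append(tuple(bands))
    return pywt.waverec2(cleaned, wavelet)
```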


Subjects
Algorithms; Image Enhancement/methods; Signal Processing, Computer-Assisted; Stochastic Processes; Brain/anatomy & histology; Computer Simulation; Heart/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Models, Biological; Models, Statistical; Radionuclide Imaging; Ultrasonography/methods; User-Computer Interface
20.
Phys Med Biol; 47(2): 289-303, 2002 Jan 21.
Article in English | MEDLINE | ID: mdl-11837618

ABSTRACT

Thicker crystals have been used to increase the detection efficiency of gamma cameras for coincidence imaging. This results in a higher detection probability for oblique incidences than for perpendicular incidences. As the point sensitivity at different radial distances is composed of coincidences with different oblique incidences, the thickness of the crystal will have an effect on the sensitivity profiles. To correct this non-uniform sensitivity, a sensitivity map is needed which can be measured or calculated. For dual- or triple-head gamma camera based positron emission tomography (PET) a calculated sensitivity map is preferable because the radius and the head orientation often change between different acquisitions. First, these sensitivity maps are calculated for 2D acquisitions by assuming a linear relationship between the detection efficiency and the crystal thickness. The 2D approximation is reasonable for gamma cameras with a small axial acceptance angle. The results of the 2D approximation show a good agreement with the results of Monte Carlo simulations of different realistic gamma camera configurations. For dual-head gamma cameras the influence on the sensitivity profile is limited. Greater variation of the sensitivity profile is seen on three-headed gamma cameras and correction of this effect is necessary to obtain uniform reconstruction. To increase the sensitivity of gamma cameras, axial collimators with larger acceptance angles are used. To obtain a correct sensitivity for these cameras a sensitivity calculation in 3D is needed. For a fixed camera position the sensitivity is obtained by integrating the detection efficiency over the solid angle formed by the voxel and the intersection of the first detector with the projection of the second detector on the plane of the first detector. The geometric sensitivity is obtained by averaging this for all camera angles. The values obtained show a good agreement with the Monte Carlo simulations for different points in the field of view. Both 2D and 3D sensitivity profiles show the highest influence of the detector thickness on the radial profiles of the U-shape configuration. Taking the detector thickness into account also has an influence on the axial profiles. This influence is maximal in the centre where more oblique coincidences are present. This method is not limited to gamma camera based PET scanners but can be used to calculate the sensitivity of any PET camera with continuous detector blocks.
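
The angle dependence discussed here comes from the path length through the crystal; a sketch of both the linear approximation used for the 2D maps and the usual exponential attenuation law is given below (the attenuation coefficient is an approximate textbook value for NaI at 511 keV).

```python
import numpy as np

MU_NAI_511_PER_MM = 0.034   # approximate linear attenuation coefficient of NaI

def detection_efficiency(theta_rad, thickness_mm, linear_approx=True):
    """Detection probability for a photon hitting the crystal at angle theta
    from the normal; the path length through the crystal is t / cos(theta)."""
    path = thickness_mm / np.cos(theta_rad)
    if linear_approx:                      # efficiency ~ proportional to path length
        return MU_NAI_511_PER_MM * path
    return 1.0 - np.exp(-MU_NAI_511_PER_MM * path)
```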


Subjects
Tomography, Emission-Computed/instrumentation; Tomography, Emission-Computed/methods; Gamma Rays; Light; Models, Statistical; Monte Carlo Method; Photons; Scattering, Radiation; Sensitivity and Specificity