Results 1 - 10 of 10
1.
Phys Med Biol; 67(19), 2022 Oct 04.
Article in English | MEDLINE | ID: mdl-36113437

ABSTRACT

Objective. To study the performance of a spectral reconstruction method for Compton imaging of polychromatic sources and compare it to standard Compton reconstruction based on the selection of photopeak events. Approach. The proposed spectral and the standard photopeak reconstruction methods are used to reconstruct images from simulated sources emitting photons of 140, 245, 364 and 511 keV simultaneously. Data are simulated with perfect and realistic energy resolutions, including Doppler broadening. We compare photopeak and spectral reconstructed images both qualitatively and quantitatively by means of the activity recovery coefficient and the spatial resolution. Main results. The presented method improves the images of polychromatic sources with respect to standard reconstruction methods. The main reasons for this improvement are the increase in available statistics and the reduction of contamination from higher initial photon energies. The reconstructed images show lower noise, a higher activity recovery coefficient and better spatial resolution. The improvements become more pronounced as the energy resolution of the detectors degrades. Significance. Compton cameras have been studied for their ability to image polychromatic sources, thus allowing simultaneous imaging of multiple radiotracers. In such scenarios, Compton images are conventionally reconstructed for each emission energy independently, selecting only those measured events that deposit a total energy within a fixed window around the known emission lines. We propose to employ a spectral image reconstruction method for polychromatic sources, which increases the available statistics by using the information from events with partial energy deposition. The detector energy resolution determines the energy window used to select photopeak events and therefore the level of contamination by higher energies; the spectral method is thus expected to have a greater impact as the detector resolution worsens. In this paper we focus on energy ranges relevant to nuclear medical imaging and consider realistic energy resolutions.
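As a rough illustration of the photopeak selection that the spectral method is meant to relax, the sketch below filters a simulated event list by total deposited energy around each emission line. The event list, window width and resolution value are assumptions for the example, not the paper's simulation setup; the spectral method instead also keeps partial-deposition events and accounts for them in the forward model.

```python
import numpy as np

# Hypothetical event list: total energy deposited in the camera per event (keV).
rng = np.random.default_rng(0)
deposited = rng.uniform(50, 600, size=100_000)

emission_lines = [140.0, 245.0, 364.0, 511.0]   # keV, as in the simulated source
fwhm_fraction = 0.10                            # assumed 10% FWHM energy resolution

def photopeak_selection(energies, lines, fwhm_fraction):
    """Keep only events whose total deposited energy falls inside a window
    centered on a known emission line (the 'standard' photopeak selection)."""
    selected = {}
    for line in lines:
        half_width = 0.5 * fwhm_fraction * line   # assumed window = +/- FWHM/2
        mask = np.abs(energies - line) <= half_width
        selected[line] = energies[mask]
    return selected

per_line = photopeak_selection(deposited, emission_lines, fwhm_fraction)
for line, events in per_line.items():
    print(f"{line:.0f} keV window keeps {events.size} of {deposited.size} events")
```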


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Monte Carlo Method; Phantoms, Imaging; Photons
2.
Phys Med Biol; 67(20), 2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36162406

ABSTRACT

Objective. Cone-beam computed tomography (CBCT) is becoming increasingly popular in applications such as 3D dental imaging. Compared to the standard Feldkamp algorithm, iterative methods have shown improved image quality when reconstructing low-dose acquisitions, despite their long computing time. An interesting aspect of iterative methods is their ability to include prior information such as sparsity constraints. While a large panel of optimization algorithms, along with their adaptations to tomographic problems, is available, they have mainly been studied on 2D parallel or fan-beam data. The issues raised by 3D CBCT, and moreover by truncated projections, are still poorly understood. Approach. We compare different carefully designed optimization schemes in the context of realistic 3D dental imaging. Besides two known algorithms, SIRT-TV and MLEM, we investigate the primal-dual hybrid gradient (PDHG) approach and a newly proposed MLEM-TV optimizer. The latter alternates EM steps and TV denoising, a combination not yet investigated for CBCT. Experiments are performed on both simulated data from a 3D jaw phantom and data acquired with a dental clinical scanner. Main results. With some adaptations to the specificities of CBCT operators, the PDHG and MLEM-TV algorithms provide the best reconstruction quality. These results were obtained by comparing the full-dose image with a low-dose image and an ultra-low-dose image. Significance. The convergence speed of the original iterative methods is hampered by the conical geometry and significantly reduced compared to parallel geometries. We promote the pre-conditioned version of PDHG and propose a pre-conditioned version of the MLEM-TV algorithm. To the best of our knowledge, this is the first time PDHG and convergent MLEM-TV algorithms are evaluated on experimental dental CBCT data, where constraints such as projection truncation and the presence of metal have to be jointly overcome.
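A minimal, matrix-based sketch of the EM/TV alternation described above, assuming a small 2D problem with an explicit nonnegative system matrix and using scikit-image's Chambolle TV denoiser as the TV step. The convergent, pre-conditioned 3D CBCT version proposed in the paper is considerably more involved; this only illustrates the alternation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def mlem_tv(A, y, n_iter=50, tv_weight=0.05, shape=(64, 64)):
    """Toy alternation of one EM (MLEM) update and one TV-denoising step.
    A: (n_rays, n_voxels) nonnegative system matrix; y: measured counts."""
    x = np.ones(A.shape[1])
    sensitivity = np.clip(A.T @ np.ones(A.shape[0]), 1e-12, None)  # column sums
    for _ in range(n_iter):
        proj = np.clip(A @ x, 1e-12, None)            # forward projection
        x = x / sensitivity * (A.T @ (y / proj))      # multiplicative EM update
        x = denoise_tv_chambolle(x.reshape(shape),    # TV regularisation step
                                 weight=tv_weight).ravel()
    return x.reshape(shape)
```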


Subject(s)
Spiral Cone-Beam Computed Tomography; Algorithms; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Phantoms, Imaging
3.
Med Phys; 49(5): 2952-2964, 2022 May.
Article in English | MEDLINE | ID: mdl-35218039

ABSTRACT

PURPOSE: Computed tomography (CT) is a technique of choice to image bone structure at different scales. Methods to enhance the quality of degraded reconstructions obtained from low-dose CT data have recently shown impressive results, especially in the realm of supervised deep learning. As the choice of the loss function affects the reconstruction quality, it is necessary to examine how neural networks evaluate the correspondence between predicted and target images during the training stage. This is even more true for bone microarchitecture imaging at high spatial resolution, where both the quantitative analysis of bone mineral density (BMD) and the bone microstructure are essential for assessing diseases such as osteoporosis. Our aim is thus to evaluate the quality of reconstruction on key metrics for diagnosis depending on the loss function used to train the neural network. METHODS: We compare and analyze volumes reconstructed with neural networks trained with pixelwise, structural, and adversarial loss functions, or with combinations of them. We perform realistic simulations of various low-dose acquisitions of bone microarchitecture. Our comparative study is performed with metrics that are relevant to the diagnosis of bone diseases. We therefore focus on bone-specific metrics such as bone volume and total volume (BV and TV), resolution, connectivity assessed with the Euler number, and quantitative analysis of BMD to evaluate the quality of reconstruction obtained with networks trained with the different loss functions. RESULTS: We find that using the $L_1$ norm as the pixelwise loss is the best choice compared to $L_2$ or no pixelwise loss, since it improves resolution without deteriorating other metrics. The Visual Geometry Group (VGG) perceptual loss, especially when combined with an adversarial loss, retrieves the topological and morphological parameters of bone microarchitecture better than the Structural SIMilarity (SSIM) index, at the cost of decreased resolution performance. The adversarial loss enhances the reconstruction performance in terms of BMD distribution accuracy. CONCLUSIONS: In order to retrieve the quantitative and structural characteristics of bone microarchitecture that are essential for post-reconstruction diagnosis, our results suggest using the $L_1$ norm as part of the loss function. Trade-offs should then be made depending on the application: the VGG perceptual loss improves accuracy in terms of connectivity at the cost of deteriorated resolution, and adversarial losses help better retrieve the BMD distribution while significantly increasing the training time.
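A hedged PyTorch sketch of how such a combined training objective might be assembled (pixelwise $L_1$ + VGG perceptual + adversarial terms). The layer cut-off, weights and the assumption of 3-channel inputs are illustrative choices, not the configuration used in the study; in practice the VGG extractor would be loaded with ImageNet-pretrained weights.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Feature extractor for a perceptual (VGG) term; the cut-off at layer 16 is an
# assumption, and pretrained ImageNet weights would normally be loaded.
vgg_features = vgg16().features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def combined_loss(pred, target, disc_pred_fake=None,
                  w_l1=1.0, w_vgg=0.1, w_adv=0.01):
    """Weighted sum of pixelwise L1, VGG perceptual, and adversarial terms.
    pred/target: (N, 3, H, W) tensors (single-channel CT slices would need to
    be repeated to 3 channels for VGG). disc_pred_fake: optional discriminator
    logits on the generator output."""
    loss = w_l1 * F.l1_loss(pred, target)
    loss = loss + w_vgg * F.l1_loss(vgg_features(pred), vgg_features(target))
    if disc_pred_fake is not None:  # non-saturating GAN generator loss
        loss = loss + w_adv * F.binary_cross_entropy_with_logits(
            disc_pred_fake, torch.ones_like(disc_pred_fake))
    return loss
```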


Subject(s)
Deep Learning; Bone Density; Bone/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed
4.
Ultramicroscopy; 189: 109-123, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29655113

ABSTRACT

Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observing the 3D real-time evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly for applications on nanocatalysts. In order to accomplish such rapid acquisitions of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur may affect many projections, which in turn leads to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm on blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e. gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, the proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis.
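A small 2D sketch of the general SIRT-FISTA-TV structure named above: a SIRT-preconditioned gradient step, a TV proximal step, and FISTA momentum. It assumes an explicit system matrix and reuses scikit-image's TV denoiser as an approximate proximal operator; step sizes, normalisations and problem size are illustrative assumptions rather than the algorithm as implemented for ETEM tilt series.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def sirt_fista_tv(A, y, n_iter=100, tv_weight=0.05, shape=(64, 64)):
    """Toy FISTA loop: SIRT-like gradient step + TV proximal step + momentum.
    A: (n_rays, n_voxels) system matrix; y: measured projections."""
    row_sums = np.clip(A.sum(axis=1), 1e-12, None)   # SIRT row normalisation
    col_sums = np.clip(A.sum(axis=0), 1e-12, None)   # SIRT column normalisation
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = (A.T @ ((A @ z - y) / row_sums)) / col_sums        # preconditioned gradient
        x_new = denoise_tv_chambolle((z - grad).reshape(shape),   # TV proximal step
                                     weight=tv_weight).ravel()
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)             # FISTA momentum
        x, t = x_new, t_new
    return x.reshape(shape)
```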

5.
Phys Med Biol; 61(8): 3127-46, 2016 Apr 21.
Article in English | MEDLINE | ID: mdl-27008459

ABSTRACT

In proton therapy, the prompt-gamma (PG) radiation produced by the interactions between protons and matter is related to the range of the beam in the patient. Tomographic Compton imaging is currently being studied as a way to form a PG image and verify the treatment. However, the quality of the reconstructed images depends on a number of factors, such as the volume attenuation, the spatial and energy resolutions of the detectors, incomplete absorption of high-energy photons, and noise from other particles reaching the camera. The impact of all these factors has not been assessed in detail. In this paper we investigate the influence of the PG energy spectrum on the reconstructed images. To this aim, we describe the process from the Monte Carlo simulation of the proton irradiation, through the Compton imaging of the PG distribution, up to the image reconstruction with a statistical MLEM method. We identify specific PG energy windows that are more relevant for detecting discrepancies with the treatment plan. We find that, for the simulated Compton device, the incomplete absorption of photons with energy above about 2 MeV prevents the observation of the PG distributions at specific energies. It also leads to blurred images and smooths the distal slope of the 1D PG profiles obtained as projections on the central beam axis. We show that selecting the events produced by gamma photons that deposited almost all their energy in the camera largely improves the images, a result that emphasizes the importance of the choice of the detector. However, this initial-energy-based selection is not accessible in practice. We then propose a method to estimate the range of the PG profile both for specific deposited-energy windows and for the full-spectrum emission. The method relies on two parameters, estimated with a learning approach, and we show that it allows detecting shifts of a few millimeters in the PG profiles.
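To make the notion of a range shift on a 1D PG profile concrete, here is a simple illustrative estimator that locates the distal falloff as the depth where the profile drops to half its maximum. This is a common surrogate, not the two-parameter learning-based method proposed in the paper, and the toy profiles are synthetic data made up for the example.

```python
import numpy as np

def distal_falloff_position(profile, z, fraction=0.5):
    """Depth at which a 1D prompt-gamma profile drops to `fraction` of its
    maximum, searched beyond the maximum and refined by linear interpolation."""
    i_max = int(np.argmax(profile))
    threshold = fraction * profile[i_max]
    for i in range(i_max, len(profile) - 1):
        if profile[i] >= threshold > profile[i + 1]:
            frac = (profile[i] - threshold) / (profile[i] - profile[i + 1])
            return z[i] + frac * (z[i + 1] - z[i])
    return z[-1]   # falloff not found within the profile

# Example: detecting a ~3 mm shift between two synthetic profiles
z = np.linspace(0.0, 200.0, 401)                          # depth in mm
reference = np.exp(-(z - 120.0) ** 2 / 800.0) * (z < 130.0) + 0.05
shifted = np.exp(-(z - 123.0) ** 2 / 800.0) * (z < 133.0) + 0.05
print(distal_falloff_position(shifted, z) - distal_falloff_position(reference, z))
```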


Subject(s)
Algorithms; Diagnostic Imaging/methods; Gamma Rays; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Proton Therapy/methods; Radiation Monitoring; Computer Simulation; Humans; Monte Carlo Method
6.
Phys Med Biol; 61(1): 243-64, 2016 Jan 07.
Article in English | MEDLINE | ID: mdl-26639159

ABSTRACT

This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study attempts to address four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images.


Subject(s)
Imaging, Three-Dimensional/methods; Models, Statistical; Radiographic Image Interpretation, Computer-Assisted/methods; Humans
7.
IEEE Trans Image Process; 23(1): 332-41, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24196864

ABSTRACT

During the acquisition process with a Compton gamma camera, integrals of the intensity distribution of the source over conical surfaces are measured; they represent the Compton projections of the intensity. The inversion of the Compton transform rests on a particular Fourier-slice theorem. This paper proposes a filtered backprojection algorithm for image reconstruction from planar Compton camera data. We show how the different projections are related to each other and how they may be combined in the tomographic reconstruction step. Considering a simulated Compton imaging system, we conclude that the proposed method yields accurate reconstructed images for simple sources. An elongation of the source in the direction orthogonal to the camera may be observed and is related to the truncation of the projections induced by the finite extent of the device. This phenomenon was previously observed with other reconstruction methods, e.g., iterative maximum likelihood expectation maximization. The redundancy of the Compton transform is thus an important feature for the reduction of noise in Compton images, since the ideal assumptions of infinite width and observation time are never met in practice. We show that a selection applied to the data set partially works around projection truncation, at the expense of enhanced noise in the images.
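For orientation, a minimal sketch of the event-by-event geometry behind Compton imaging: each event constrains the source to a cone whose half-angle follows from the Compton formula, and a simple (unfiltered) backprojection accumulates those cones in a voxel grid. This is not the filtered backprojection algorithm of the paper; the grid handling, tolerance and weighting are illustrative assumptions.

```python
import numpy as np

MEC2 = 511.0  # electron rest energy, keV

def compton_angle(e_initial, e_deposited):
    """Scattering half-angle from the Compton formula, given the initial photon
    energy and the energy deposited in the scatterer (keV); assumes
    e_deposited < e_initial."""
    e_scattered = e_initial - e_deposited
    cos_theta = 1.0 - MEC2 * (1.0 / e_scattered - 1.0 / e_initial)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def backproject_cone(volume, grid, apex, axis, theta, tol=0.03):
    """Accumulate a unit weight into voxels close to the cone surface defined by
    its apex (first interaction point), axis (unit vector from the absorber back
    toward the scatterer) and half-angle theta.
    grid: (n_voxels, 3) voxel coordinates; volume: (n_voxels,) accumulator."""
    d = grid - apex                                   # vectors apex -> voxel
    dist = np.clip(np.linalg.norm(d, axis=-1), 1e-9, None)
    cos_to_axis = (d @ axis) / dist
    on_cone = np.abs(cos_to_axis - np.cos(theta)) < tol
    volume[on_cone] += 1.0
    return volume
```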


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Radionuclide Imaging/methods; Subtraction Technique; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
8.
Front Neuroinform; 3: 27, 2009.
Article in English | MEDLINE | ID: mdl-19826470

ABSTRACT

CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing and designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows, since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations such as programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism, designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating the exchange of innovative computational tools between originating labs.

9.
Neuroimage; 25(1): 141-58, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15734351

ABSTRACT

Fractional Gaussian noise (fGn) provides a parsimonious model for stationary increments of a self-similar process parameterised by the Hurst exponent, H, and variance, sigma2. Fractional Gaussian noise with H < 0.5 demonstrates negatively autocorrelated or antipersistent behaviour; fGn with H > 0.5 demonstrates 1/f, long memory or persistent behaviour; and the special case of fGn with H = 0.5 corresponds to classical Gaussian white noise. We comparatively evaluate four possible estimators of fGn parameters, one method implemented in the time domain and three in the wavelet domain. We show that a wavelet-based maximum likelihood (ML) estimator yields the most efficient estimates of H and sigma2 in simulated fGn with 0 < H < 1. Applying this estimator to fMRI data acquired in the "resting" state from healthy young and older volunteers, we show empirically that fGn provides an accommodating model for diverse species of fMRI noise, assuming adequate preprocessing to correct effects of head movement, and that voxels with H > 0.5 tend to be concentrated in cortex whereas voxels with H < 0.5 are more frequently located in ventricles and sulcal CSF. The wavelet-ML estimator can be generalised to estimate the parameter vector beta for general linear modelling (GLM) of a physiological response to experimental stimulation and we demonstrate nominal type I error control in multiple testing of beta, divided by its standard error, in simulated and biological data under the null hypothesis beta = 0. We illustrate these methods principally by showing that there are significant differences between patients with early Alzheimer's disease (AD) and age-matched comparison subjects in the persistence of fGn in the medial and lateral temporal lobes, insula, dorsal cingulate/medial premotor cortex, and left pre- and postcentral gyrus: patients with AD had greater persistence of resting fMRI noise (larger H) in these regions. Comparable abnormalities in the AD patients were also identified by a permutation test of local differences in the first-order autoregression AR(1) coefficient, which was significantly more positive in patients. However, we found that the Hurst exponent provided a more sensitive metric than the AR(1) coefficient to detect these differences, perhaps because neurophysiological changes in early AD are naturally better described in terms of abnormal salience of long memory dynamics than a change in the strength of association between immediately consecutive time points. We conclude that parsimonious mapping of fMRI noise properties in terms of fGn parameters efficiently estimated in the wavelet domain is feasible and can enhance insight into the pathophysiology of Alzheimer's disease.
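As a hedged illustration of wavelet-domain Hurst estimation, the sketch below implements the simpler log-variance regression over scales (Abry-Veitch style), not the maximum likelihood estimator that the study found most efficient; the wavelet choice and decomposition depth are assumptions.

```python
import numpy as np
import pywt

def hurst_wavelet_variance(signal, wavelet="db4", max_level=6):
    """Estimate the Hurst exponent of fractional Gaussian noise from the scaling
    of detail-coefficient variance across levels: Var(d_j) ~ 2^(j*(2H-1))."""
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    details = coeffs[1:][::-1]                     # finest (j=1) ... coarsest
    levels = np.arange(1, len(details) + 1)
    log_var = np.array([np.log2(np.var(d)) for d in details])
    slope = np.polyfit(levels, log_var, 1)[0]      # slope = 2H - 1 for fGn
    return 0.5 * (slope + 1.0)

# White noise corresponds to H = 0.5, so the estimate should land near 0.5
rng = np.random.default_rng(1)
print(hurst_wavelet_variance(rng.standard_normal(4096)))
```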


Subject(s)
Alzheimer Disease/diagnosis; Artifacts; Brain/physiopathology; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/statistics & numerical data; Normal Distribution; Aged; Aged, 80 and over; Brain/pathology; Female; Fourier Analysis; Fractals; Humans; Likelihood Functions; Male; Reference Values; Statistics as Topic
10.
Neuroimage; 23 Suppl 1: S234-49, 2004.
Article in English | MEDLINE | ID: mdl-15501094

ABSTRACT

The discrete wavelet transform (DWT) is widely used for multiresolution analysis and decorrelation or "whitening" of nonstationary time series and spatial processes. Wavelets are naturally appropriate for analysis of biological data, such as functional magnetic resonance images of the human brain, which often demonstrate scale invariant or fractal properties. We provide a brief formal introduction to key properties of the DWT and review the growing literature on its application to fMRI. We focus on three applications in particular: (i) wavelet coefficient resampling or "wavestrapping" of 1-D time series, 2- to 3-D spatial maps and 4-D spatiotemporal processes; (ii) wavelet-based estimators for signal and noise parameters of time series regression models assuming the errors are fractional Gaussian noise (fGn); and (iii) wavelet shrinkage in frequentist and Bayesian frameworks to support multiresolution hypothesis testing on spatially extended statistic maps. We conclude that the wavelet domain is a rich source of new concepts and techniques to enhance the power of statistical analysis of human fMRI data.
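A minimal sketch of the wavestrapping idea named in (i), assuming a 1-D series and simple within-scale resampling of detail coefficients with PyWavelets. The published schemes involve additional care (e.g., boundary coefficients and 2-D to 4-D extensions) that is omitted here.

```python
import numpy as np
import pywt

def wavestrap(series, wavelet="db4", level=4, rng=None):
    """Generate one 'wavestrapped' surrogate of a 1-D time series: decompose,
    resample the detail coefficients within each scale (with replacement),
    and reconstruct. Approximation coefficients are kept fixed."""
    rng = np.random.default_rng() if rng is None else rng
    coeffs = pywt.wavedec(series, wavelet, level=level)
    resampled = [coeffs[0]] + [rng.choice(d, size=d.size, replace=True)
                               for d in coeffs[1:]]
    return pywt.waverec(resampled, wavelet)[: len(series)]

# Example: a surrogate preserves the scale-wise energy but scrambles the ordering
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(1024))     # toy nonstationary series
surrogate = wavestrap(x, rng=rng)
```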


Subject(s)
Brain/anatomy & histology; Algorithms; Bayes Theorem; False Positive Reactions; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Models, Neurological; Normal Distribution; Oxygen/blood; Statistics, Nonparametric; Terminology as Topic