Results 1 - 20 of 67
1.
J Opt Soc Am A Opt Image Sci Vis ; 39(5): 959-968, 2022 May 01.
Article in English | MEDLINE | ID: mdl-36215457

ABSTRACT

There are two types of uncertainty in image reconstructions from list-mode data: statistical and deterministic. One source of statistical uncertainty is the finite number of attributes of the detected particles, which are sampled from a probability distribution on the attribute space. A deterministic source of uncertainty is the effect that null functions of the imaging operator have on reconstructed pixel or voxel values. Quantifying the reduction in this deterministic source of uncertainty when more attributes are measured for each detected particle is the subject of this work. Specifically, upper bounds on an error metric are derived to quantify the error introduced in the reconstruction by the presence of null functions, and these upper bounds are shown to be reduced when the number of attributes is increased. These bounds are illustrated with an example of a two-dimensional single photon emission computed tomography (SPECT) system where the depth of interaction in the scintillation crystal is added to the attribute vector.
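For orientation, the role of null functions can be summarized with the standard measurement/null decomposition of the object with respect to the imaging operator; the notation below is generic, and the specific error metric and upper bounds are those derived in the paper itself, not reproduced here.

```latex
% Decomposition of the object f with respect to a linear imaging operator H:
f = f_{\mathrm{meas}} + f_{\mathrm{null}}, \qquad \mathcal{H} f_{\mathrm{null}} = 0 .

% Contribution of the null component to a pixel/voxel or ROI value with template chi_k:
\Delta_k = \int \chi_k(\mathbf{r})\, f_{\mathrm{null}}(\mathbf{r})\, d\mathbf{r} .
```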

2.
J Opt Soc Am A Opt Image Sci Vis ; 39(7): 1275-1281, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-36215613

ABSTRACT

For Earth-viewing imaging instruments in space, a variety of nuisance signals can interfere with certain imaging tasks, such as reflections from clouds, reflections from the ground, and emissions from the OH-airglow layer. One method for separating these signals is to perform tomographic reconstructions from the collected data. A persistent difficulty for this method is resolution along the altitude axis, and several approaches for improving it are discussed. An implementation of the maximum-likelihood expectation-maximization algorithm is given and analyzed.
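As a point of reference for the algorithm mentioned above, a minimal sketch of the standard maximum-likelihood expectation-maximization (MLEM) update follows, assuming a discretized system matrix H (measurement bins x voxels) and Poisson data g; the variable names are illustrative and not taken from the paper.

```python
import numpy as np

def mlem(H, g, n_iter=50, eps=1e-12):
    """Standard MLEM iteration for Poisson data g with system matrix H."""
    sens = H.sum(axis=0)               # sensitivity image: column sums of H
    f = np.ones(H.shape[1])            # uniform, non-negative initial estimate
    for _ in range(n_iter):
        proj = H @ f                   # forward projection of current estimate
        ratio = g / np.maximum(proj, eps)
        f *= (H.T @ ratio) / np.maximum(sens, eps)   # multiplicative update
    return f
```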

3.
J Opt Soc Am A Opt Image Sci Vis ; 39(7): 1282-1288, 2022 Jul 01.
Article in English | MEDLINE | ID: mdl-36215614

ABSTRACT

This paper is the second of two papers exploring tomographic reconstruction from a space platform. A simplified model of short-wave infrared emissions in the atmosphere is given. Simulations tested how reconstruction effectiveness depends on signal amplitude, frequency, signal-to-noise ratio, number of iterations run, and other factors. Maximum-likelihood expectation maximization is shown to be effective for reconstructing low-signal cases.

4.
Med Phys ; 48(10): 5959-5973, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34390587

ABSTRACT

PURPOSE: The goal is to provide a sufficient condition for the invertibility of a multi-energy (ME) X-ray transform. The energy-dependent X-ray attenuation profiles can be represented by a set of coefficients using the Alvarez-Macovski (AM) method. An ME X-ray transform is a mapping from N AM coefficients to N noise-free energy-weighted measurements, where N ≥ 2. METHODS: We apply a general invertibility theorem to prove the equivalence of global and local invertibility for an ME X-ray transform. We explore global invertibility by testing whether the Jacobian of the mapping, J(A), has zero values over the support of the mapping. The Jacobian of an arbitrary ME X-ray transform is an integral over all spectral measurements. A sufficient condition for J(A) ≠ 0 for all A is that the integrand of J(A) is ≥ 0 (or ≤ 0) everywhere; the trivial case in which the integrand equals 0 everywhere is excluded. Using symmetry, we simplified the integrand of the Jacobian to three factors that are determined by the total attenuation, the basis functions, and the energy-weighting functions, respectively. The factor related to the total attenuation is always positive; hence, the invertibility of the X-ray transform can be determined by testing the signs of the other two factors. Furthermore, we use the Cramér-Rao lower bound (CRLB) to characterize the noise-induced estimation uncertainty and provide a maximum-likelihood (ML) estimator. RESULTS: The factor related to the basis functions is always negative when the photoelectric/Compton/Rayleigh basis functions are used and K-edge materials are not considered. The sign of the energy-weighting factor depends on the system source spectra and the detector response functions. For four special types of X-ray detectors, the sign of this factor stays the same over the integration range. Therefore, when these four types of detectors are used for imaging non-K-edge materials, the ME X-ray transform is globally invertible. The same framework can be used to study an arbitrary ME X-ray imaging system, for example, when K-edge materials are present. Furthermore, the ML estimator we present is unbiased and efficient and can be used for a wide range of scenes. CONCLUSIONS: We have provided a framework to study the invertibility of an arbitrary ME X-ray transform and proved global invertibility for four types of systems.
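For reference, the two-basis (N = 2) Alvarez-Macovski forward model and the Jacobian condition described above can be written as follows; the notation is assumed for illustration, not copied from the paper.

```latex
% Alvarez-Macovski representation of the attenuation coefficient (two-basis case,
% e.g., photoelectric and Compton basis functions f_1, f_2):
\mu(E;\mathbf{r}) \approx a_1(\mathbf{r})\, f_1(E) + a_2(\mathbf{r})\, f_2(E)

% Noise-free energy-weighted measurement n along a ray, with energy weighting w_n(E)
% combining the source spectrum and detector response:
I_n(A) = \int w_n(E)\, \exp\!\Big[-\textstyle\sum_{k=1}^{2} A_k f_k(E)\Big]\, dE,
\qquad A_k = \int_{\mathrm{ray}} a_k(\mathbf{r})\, d\ell .

% Global invertibility of A \mapsto (I_1, I_2) follows if the Jacobian never vanishes:
J(A) = \det\!\left[\frac{\partial I_n}{\partial A_k}\right] \neq 0 \quad \text{for all } A .
```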


Subjects
Photons, Radiography, X-Rays
5.
J Opt Soc Am A Opt Image Sci Vis ; 38(3): 387-394, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33690468

ABSTRACT

An upper bound is derived for a figure of merit that quantifies the error in reconstructed pixel or voxel values induced by the presence of null functions for any list-mode system. It is shown that this upper bound decreases as the region in attribute space occupied by the allowable attribute vectors expands. This upper bound allows quantification of the reduction in this error when this type of expansion is implemented. Of course, reconstruction error is also caused by system noise in the data, which has to be treated statistically, but we will not be addressing that problem here. This method is not restricted to pixelized or voxelized reconstructions and can in fact be applied to any region of interest. The upper bound for pixelized reconstructions is demonstrated on a list-mode 2D Radon transform example. The expansion in the attribute space is implemented by doubling the number of views. The results show how the pixel size and number of views both affect the upper bound on reconstruction error from null functions. This reconstruction error can be averaged over all pixels to give a single number or can be plotted as a function on the pixel grid. Both approaches are demonstrated for the example system. In conclusion, this method can be applied to any list-mode system for which the system operator is known and could be used in the design of the systems and reconstruction algorithms.
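A finite-dimensional analogue of the effect described above can be sketched in a few lines: adding rows (views) to a discrete system matrix can only shrink its null space, and hence the null component of any object. The matrices and object below are arbitrary stand-ins, not the list-mode Radon operator used in the paper.

```python
import numpy as np

def null_component(H, f):
    """Component of f lying in the null space of the discrete operator H."""
    f_meas = np.linalg.pinv(H) @ (H @ f)   # orthogonal projection onto the measurement (row) space
    return f - f_meas

rng = np.random.default_rng(0)
n_pix = 64
f = rng.random(n_pix)                          # toy object on a 1-D pixel grid

H1 = rng.random((16, n_pix))                   # stand-in system matrix, few "views"
H2 = np.vstack([H1, rng.random((16, n_pix))])  # doubled number of rows/views

print(np.linalg.norm(null_component(H1, f)))   # null-space error with fewer views
print(np.linalg.norm(null_component(H2, f)))   # never larger after adding views
```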

6.
Inverse Probl ; 36(8)2020 Aug.
Article in English | MEDLINE | ID: mdl-33071423

ABSTRACT

The potential to perform attenuation and scatter compensation (ASC) in single-photon emission computed tomography (SPECT) imaging without a separate transmission scan is highly significant. In this context, attenuation in SPECT is primarily due to Compton scattering, where the probability of Compton scatter is proportional to the attenuation coefficient of the tissue, and the energy of the scattered photon is related to the scattering angle. Based on this premise, we investigated whether SPECT scattered-photon data acquired in list-mode (LM) format and including the energy information can be used to estimate the attenuation map. For this purpose, we propose a Fisher-information-based method that yields the Cramér-Rao bound (CRB) for the task of jointly estimating the activity and attenuation distributions using only the SPECT emission data. In the process, a path-based formalism to process the LM SPECT emission data, including the scattered-photon data, is proposed. The Fisher-information method was implemented on NVIDIA graphics processing units (GPUs) for acceleration. The method was applied to analyze the information content of SPECT LM emission data containing up to first-order scattered events, in a simulated SPECT system with parameters modeling a clinical system, using realistic computational studies with 2-D digital synthetic and anthropomorphic phantoms. The method was also applied to LM data containing up to second-order scatter for a synthetic phantom. Experiments with anthropomorphic phantoms simulated myocardial perfusion and dopamine transporter (DaT)-Scan SPECT studies. The results show that the CRB obtained for the attenuation and activity coefficients was typically much lower than the true values of these coefficients. An increase in the number of detected photons yielded a lower CRB for both the attenuation and activity coefficients. Further, we observed that systems with better energy resolution yielded a lower CRB for the attenuation coefficient. Overall, the results provide evidence that LM SPECT emission data, including the scattered photons, contain information to jointly estimate the activity and attenuation coefficients.
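The last computational step, going from a Fisher information matrix to Cramér-Rao bounds on the activity and attenuation parameters, is generic; a minimal sketch follows, with an illustrative 2 x 2 matrix standing in for the paper's full LM-data Fisher information.

```python
import numpy as np

def cramer_rao_bounds(fisher_info):
    """Diagonal of the inverse Fisher information matrix: per-parameter CRBs."""
    cov_lower_bound = np.linalg.inv(fisher_info)   # assumes a non-singular Fisher matrix
    return np.diag(cov_lower_bound)

# Toy 2x2 Fisher information for one (activity, attenuation) parameter pair.
F = np.array([[40.0, 5.0],
              [ 5.0, 10.0]])
print(cramer_rao_bounds(F))   # lower bounds on the variances of unbiased estimators
```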

7.
J Opt Soc Am A Opt Image Sci Vis ; 37(2): 174-181, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-32118895

ABSTRACT

The van Trees inequality relates the ensemble mean squared error of an estimator to a Bayesian version of the Fisher information. The Ziv-Zakai inequality relates the ensemble mean squared error of an estimator to the minimum probability of error for the task of detecting a change in the parameter. In this work we complete this circle by deriving an inequality that relates this minimum probability of error to the Bayesian version of the Fisher information. We discuss this result for both scalar and vector parameters. In the process we discover that an important intermediary in the calculation is the total variation of the posterior probability distribution function for the parameter given the data. This total variation is of interest in its own right since it may be easier to compute than the other figures of merit discussed here.
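For reference, the scalar form of the van Trees inequality mentioned above is reproduced below (standard regularity conditions assumed); the new inequality linking the minimum probability of error to the Bayesian Fisher information is the contribution of the paper and is not restated here.

```latex
% van Trees (Bayesian Cramer-Rao) inequality for a scalar parameter theta with
% prior density p(theta) and data Fisher information I(theta):
\mathbb{E}\big[(\hat{\theta} - \theta)^2\big]
\;\geq\; \frac{1}{\mathbb{E}_\theta\!\left[ I(\theta) \right] + I_{\mathrm{prior}}},
\qquad
I_{\mathrm{prior}} = \int \frac{\big[ p'(\theta) \big]^2}{p(\theta)}\, d\theta .
```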

8.
J Opt Soc Am A Opt Image Sci Vis ; 37(3): 450-457, 2020 Mar 01.
Article in English | MEDLINE | ID: mdl-32118929

ABSTRACT

List-mode data are increasingly being used in single photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging, among other imaging modalities. However, there are still many imaging designs that effectively bin list-mode data before image reconstruction or other estimation tasks are performed. Intuitively, the binning operation should result in a loss of information. In this work, we show that this is true for Fisher information and provide a computational method for quantifying the information loss. In the end, we find that the information loss depends on three factors. The first factor is related to the smoothness of the mean data function for the list-mode data. The second factor is the actual object being imaged. Finally, the third factor is the binning scheme in relation to the other two factors.

9.
J Opt Soc Am A Opt Image Sci Vis ; 36(7): 1209-1214, 2019 Jul 01.
Article in English | MEDLINE | ID: mdl-31503959

ABSTRACT

We derive a connection between the performance of statistical estimators and the performance of the ideal observer on related detection tasks. Specifically, we show how the task-specific Shannon information for the task of detecting a change in a parameter is related to the Fisher information and to the Bayesian Fisher information. We have previously shown that this Shannon information is related via an integral transform to the minimum probability of error on the same task. We then outline a circle of relations: the minimum probability of error is related to the ensemble mean squared error of an estimator via the Ziv-Zakai inequality; the ensemble mean squared error is related to the Bayesian Fisher information via the van Trees inequality; and the Bayesian Fisher information is related to the Shannon information for a detection task via the work presented here.

10.
J Med Imaging (Bellingham) ; 6(1): 015502, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30820441

ABSTRACT

Previously published work on joint estimation/detection tasks has focused on the area under the estimation receiver operating characteristic (EROC) curve as a figure of merit (FOM) for these tasks in imaging. Another FOM for these joint tasks is the Bayesian risk, where a cost is assigned to all detection outcomes and to the estimation errors, and then averaged over all sources of randomness in the object ensemble and the imaging system. Important elements of the cost function, which are not included in standard EROC analysis, are that the cost for a false positive depends on the estimate produced for the parameter vector, and the cost for a false negative depends on the true value of the parameter vector. The ideal observer in this setting, which minimizes the risk, is derived for two applications. In the first application, a parameter vector is estimated only in the case of a signal-present classification. In the second application, parameter vectors are estimated for either classification, and these vectors may have different dimensions. In both applications, a risk-based estimation receiver operating characteristic curve is defined and an expression for the area under this curve is given. It is also shown that, for some observers, this area may be estimated from a two-alternative forced-choice test. Finally, if the classifier is optimized for a given estimator, then it is shown that the slope of the risk-based estimation receiver operating characteristic curve at each point is the negative of the ratio of the prior probabilities for the two classes.

11.
PLoS One ; 13(6): e0199823, 2018.
Article in English | MEDLINE | ID: mdl-29958271

ABSTRACT

Many different physiological processes affect the growth of malignant lesions and their response to therapy. Each of these processes is spatially and genetically heterogeneous, dynamically evolving in time, controlled by many other physiological processes, and intrinsically random and unpredictable. The objective of this paper is to show that all of these properties of cancer physiology can be treated in a unified, mathematically rigorous way via the theory of random processes. We treat each physiological process as a random function of position and time within a tumor, defining the joint statistics of such functions via the infinite-dimensional characteristic functional. The theory is illustrated by analyzing several models of drug delivery and response of a tumor to therapy. To apply the methodology to precision cancer therapy, we use maximum-likelihood estimation with emission computed tomography (ECT) data to estimate unknown patient-specific physiological parameters, ultimately demonstrating how to predict the probability of tumor control for an individual patient undergoing a proposed therapeutic regimen.
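As a much simpler point of reference (not the paper's characteristic-functional machinery), tumor control probability is often computed with a Poisson model on top of linear-quadratic cell survival; a sketch with illustrative parameter values follows.

```python
import numpy as np

def poisson_tcp(n_clonogens, n_fractions, dose_per_fraction, alpha=0.3, beta=0.03):
    """Poisson tumor control probability with linear-quadratic per-fraction survival.
    All parameter values here are illustrative, not patient-specific estimates."""
    sf_per_fraction = np.exp(-alpha * dose_per_fraction - beta * dose_per_fraction**2)
    expected_survivors = n_clonogens * sf_per_fraction**n_fractions
    return np.exp(-expected_survivors)   # probability that no clonogenic cell survives

print(poisson_tcp(n_clonogens=1e7, n_fractions=30, dose_per_fraction=2.0))
```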


Subjects
Antineoplastic Agents/therapeutic use, Drug Delivery Systems/methods, Biological Models, Neoplasms, Emission Computed Tomography, Humans, Neoplasms/drug therapy, Neoplasms/physiopathology
12.
IEEE Trans Radiat Plasma Med Sci ; 1(5): 435-443, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29276799

ABSTRACT

A method for optimization of an adaptive single-photon emission computed tomography (SPECT) system is presented. Adaptive imaging systems can quickly change their hardware configuration in response to the data being generated in order to improve image quality for a specific task. In this work we simulate an adaptive SPECT system and propose a method for finding the adaptation that maximizes performance on a signal-estimation task. First, a simulated object model containing a spherical signal is imaged with a scout configuration. A Markov-chain Monte Carlo (MCMC) technique uses the scout data to generate an ensemble of possible objects consistent with those data. This object ensemble is imaged by numerous simulated hardware configurations, and for each system, estimates of signal activity, size, and location are calculated via the Scanning Linear Estimator (SLE). A figure of merit based on a Modified Dice Index (MDI) quantifies the performance of each imaging configuration and allows for optimization of the adaptive SPECT system. This figure of merit is the product of two terms: the first uses the Dice similarity index to determine the percentage of overlap between the actual and the estimated spherical signal; the second is an exponential function of the squared error in the activity estimate. The MDI combines the errors in the estimates of activity, size, and location into one convenient metric and allows for simultaneous optimization of the SPECT system with respect to all of the estimated signal parameters. The results of our optimizations indicate that the adaptive system performs better than a non-adaptive one in conditions where the diagnostic scan has a low photon count - on the order of a thousand photons per projection. In a statistical study, we optimized the SPECT system for one hundred unique objects and demonstrated that the average MDI on an estimation task is 0.84 for the adaptive system and 0.65 for the non-adaptive system.
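A sketch of a figure of merit of the kind described above is given below; the exact weighting of the exponential activity-error term is not specified in the abstract, so the functional form and the parameter tau are assumptions.

```python
import numpy as np

def modified_dice_index(true_mask, est_mask, true_activity, est_activity, tau=1.0):
    """Illustrative MDI: Dice overlap of the true and estimated signal regions,
    down-weighted by an exponential penalty on the squared relative activity error.
    The exact penalty form used in the paper is not given in the abstract."""
    intersection = np.logical_and(true_mask, est_mask).sum()
    dice = 2.0 * intersection / (true_mask.sum() + est_mask.sum())
    activity_penalty = np.exp(-((est_activity - true_activity) / true_activity) ** 2 / tau)
    return dice * activity_penalty
```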

13.
J Opt Soc Am A Opt Image Sci Vis ; 33(8): 1464-75, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27505644

ABSTRACT

Characteristic functionals are one of the main analytical tools used to quantify the statistical properties of random fields and generalized random fields. The viewpoint taken here is that a random field is the correct model for the ensemble of objects being imaged by a given imaging system. In modern digital imaging systems, random fields are not used to model the reconstructed images themselves since these are necessarily finite dimensional. After a brief introduction to the general theory of characteristic functionals, many examples relevant to imaging applications are presented. The propagation of characteristic functionals through both a binned and list-mode imaging system is also discussed. Methods for using characteristic functionals and image data to estimate population parameters and classify populations of objects are given. These methods are based on maximum likelihood and maximum a posteriori techniques in spaces generated by sampling the relevant characteristic functionals through the imaging operator. It is also shown how to calculate a Fisher information matrix in this space. These estimators and classifiers, and the Fisher information matrix, can then be used for image quality assessment of imaging systems.
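For reference, one common definition of the characteristic functional of a random field f is given below; sign and normalization conventions vary, so this should be read as an illustrative convention rather than necessarily the one used in the paper.

```latex
% Characteristic functional of a random field f, evaluated at a test function xi:
\Psi_f(\xi) \;=\; \Big\langle \exp\!\big[-2\pi i\, (\xi, f)\big] \Big\rangle_f ,
\qquad (\xi, f) = \int \xi(\mathbf{r})\, f(\mathbf{r})\, d\mathbf{r} .
```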

14.
J Opt Soc Am A Opt Image Sci Vis ; 33(6): 1214-25, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27409452

ABSTRACT

We present a new method for computing optimized channels for estimation tasks that is feasible for high-dimensional image data. Maximum-likelihood (ML) parameter estimates are challenging to compute from high-dimensional likelihoods. The dimensionality reduction from M measurements to L channels is a critical advantage of channelized quadratic estimators (CQEs), since estimating likelihood moments from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. The channelized likelihood is then used to form ML estimates of the parameter(s). In this work we choose an imaging example in which the second-order statistics of the image data depend upon the parameter of interest: the correlation length. Correlation lengths are used to approximate background textures in many imaging applications, and in these cases an estimate of the correlation length is useful for pre-whitening. In a simulation study we compare the estimation performance, as measured by the root-mean-squared error (RMSE), of correlation-length estimates from CQE and from power spectral density (PSD) distribution fitting. To abide by the assumptions of the PSD method, we simulate an ergodic, isotropic, stationary, zero-mean random process. These assumptions are not part of the CQE formalism. The CQE method assumes a Gaussian channelized likelihood, which can be valid even for non-Gaussian image data, since the channel outputs are formed from weighted sums of the image elements. We show that, for three or more channels, the RMSE of CQE estimates of correlation length is lower than that of conventional PSD estimates. We also show that computing the CQE with a standard nonlinear optimization method produces channels that yield an RMSE within 2% of the analytic optimum. CQE estimates of anisotropic correlation lengths are reported to demonstrate the technique on a two-parameter estimation problem.
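A rough sketch of the channelized maximum-likelihood step is given below, assuming an exponential covariance model parameterized by the correlation length and a fixed channel matrix T; the channel optimization that is the subject of the paper is not shown, and all names are illustrative.

```python
import numpy as np

def exp_covariance(coords, corr_len, sigma2=1.0):
    """Isotropic exponential covariance: sigma^2 * exp(-|r_i - r_j| / corr_len)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / corr_len)

def ml_corr_length(T, v, coords, candidates):
    """Grid-search ML estimate of the correlation length from channelized data v = T @ u,
    assuming zero-mean Gaussian channel outputs (as in the CQE formalism)."""
    best, best_ll = None, -np.inf
    for ell in candidates:
        C = T @ exp_covariance(coords, ell) @ T.T        # channelized covariance
        _, logdet = np.linalg.slogdet(C)
        ll = -0.5 * (logdet + v @ np.linalg.solve(C, v))  # Gaussian log-likelihood (up to const.)
        if ll > best_ll:
            best, best_ll = ell, ll
    return best
```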

15.
J Opt Soc Am A Opt Image Sci Vis ; 33(5): 930-7, 2016 05 01.
Article in English | MEDLINE | ID: mdl-27140890

ABSTRACT

We show how Shannon information is mathematically related to receiver operating characteristic (ROC) analysis for multiclass classification problems in imaging. In particular, the minimum probability of error for the ideal observer, as a function of the prior probabilities for each class, determines the Shannon information for the classification task, also considered as a function of the prior probabilities on the classes. In the process, we show how an ROC hypersurface that has been studied by other researchers is mathematically related to a Shannon information ROC (SIROC) hypersurface. In fact, the ROC hypersurface completely determines the SIROC hypersurface via a non-local integral transform on the ROC hypersurface. We also show that both hypersurfaces are convex and satisfy other geometrical relationships via the Legendre transform.

16.
J Opt Soc Am A Opt Image Sci Vis ; 33(3): 286-92, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26974897

ABSTRACT

Shannon information is defined for imaging tasks where signal detection is combined with parameter estimation. The first task considered is when the parameters are associated with the signal and parameter estimates are only produced when the signal is present. The second task examined is when the parameters are associated with the object being imaged, and parameter estimates are produced whether the signal is present or not. In each case, the Shannon information expression has a simple additive form.


Subjects
Optical Imaging/methods, Theoretical Models, Normal Distribution, Signal-to-Noise Ratio
17.
Nucl Instrum Methods Phys Res A ; 805: 72-86, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26644631

ABSTRACT

The Fano factor of an integer-valued random variable is defined as the ratio of its variance to its mean. Correlation between the outputs of two photomultiplier tubes on opposite faces of a scintillation crystal was used to estimate the Fano factor of photoelectrons and scintillation photons. Correlations between the integrals of the detector outputs were used to estimate the photoelectron and photon Fano factors for YAP:Ce, SrI2:Eu, and CsI:Na scintillator crystals. At 662 keV, SrI2:Eu was found to be sub-Poisson, while CsI:Na and YAP:Ce were found to be super-Poisson. An experimental setup inspired by the Hanbury Brown and Twiss experiment was used to measure the correlations as a function of time between the outputs of two photomultiplier tubes viewing the same scintillation event. A model of the scintillation and detection processes was used to generate simulated detector outputs as a function of time for different values of the Fano factor. The simulated outputs from the model for different Fano factors were compared to the experimentally measured detector outputs to estimate the Fano factor of the scintillation photons for YAP:Ce and LaBr3:Ce scintillator crystals. At 662 keV, LaBr3:Ce was found to be sub-Poisson, while YAP:Ce was found to be close to Poisson.
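The paper's estimators are based on correlations between two photomultiplier tubes; as a baseline, the textbook definition of the Fano factor applied to a sample of integer-valued outcomes is simply the following.

```python
import numpy as np

def fano_factor(counts):
    """Fano factor of an integer-valued sample: variance divided by mean.
    F < 1: sub-Poisson, F = 1: Poisson, F > 1: super-Poisson."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

print(fano_factor(np.random.default_rng(0).poisson(5000, size=10000)))  # close to 1
```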

18.
IEEE Trans Nucl Sci ; 62(1): 42-56, 2015 Feb.
Article in English | MEDLINE | ID: mdl-26523069

ABSTRACT

The Fano factor of an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals has been reported to have Fano factors ranging from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated whether lower noise in the scintillation light results in better spatial and energy resolution. The impact of the Fano factor on the estimation of the position of interaction and the energy deposited in simple gamma-camera geometries is assessed by two methods - calculating the Cramér-Rao bound and estimating the variance of a maximum-likelihood estimator. The two methods are consistent with each other and indicate that, when estimating the position of interaction and the energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor, however, results in better energy resolution.

19.
J Opt Soc Am A Opt Image Sci Vis ; 32(7): 1288-301, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26367158

ABSTRACT

Shannon information (SI) and the ideal-observer receiver operating characteristic (ROC) curve are two different methods for analyzing the performance of an imaging system on a binary classification task, such as the detection of a variable signal embedded within a random background. In this work we describe a new ROC curve, the Shannon information receiver operating characteristic curve (SIROC), which is derived from the SI expression for a binary classification task. We then show that the ideal-observer ROC curve and the SIROC have many properties in common and are equivalent descriptions of the optimal performance of an observer on the task. This equivalence is described mathematically by an integral transform that maps the ideal-observer ROC curve onto the SIROC. This leads to an integral transform relating the minimum probability of error, as a function of the odds against a signal, to the conditional entropy, as a function of the same variable. This last relation gives us the complete mathematical equivalence between ideal-observer ROC analysis and SI analysis of the classification task for a given imaging system. We also find that there is a close relationship between the area under the ideal-observer ROC curve, which is often used as a figure of merit for imaging systems, and the area under the SIROC. Finally, we show that the relationships between the two curves result in new inequalities relating SI to ROC quantities for the ideal observer.


Assuntos
Entropia , Processamento de Imagem Assistida por Computador/métodos , Curva ROC , Área Sob a Curva , Distribuição Normal , Variações Dependentes do Observador , Probabilidade , Razão Sinal-Ruído
20.
Phys Med Biol ; 60(18): 7359-85, 2015 Sep 21.
Article in English | MEDLINE | ID: mdl-26350439

ABSTRACT

Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
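The measurement/null decomposition described above can be illustrated with a small finite-dimensional stand-in for the imaging operator (the paper performs the SVD for continuous photon-processing operators); a sketch follows, with arbitrary illustrative names.

```python
import numpy as np

def svd_decompose(H, f, tol=1e-10):
    """Split f into measurement and null components using the SVD of a discrete
    stand-in H for the imaging operator."""
    U, s, Vt = np.linalg.svd(H, full_matrices=True)
    rank = int((s > tol * s.max()).sum())
    V_meas = Vt[:rank].T                  # right singular vectors spanning the measurement space
    f_meas = V_meas @ (V_meas.T @ f)      # measurement component
    return f_meas, f - f_meas             # (measurement, null) components
```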


Subjects
Algorithms, Computer-Assisted Image Processing/methods, Nuclear Medicine, Imaging Phantoms, Photons, Single-Photon Emission Computed Tomography/methods, Computer Simulation, Humans, Computer-Assisted Signal Processing