1.
Article in English | MEDLINE | ID: mdl-38799476

ABSTRACT

Undersampling in the frequency domain (k-space) in MRI enables faster data acquisition. In this study, we used a fixed 1D undersampling factor of 5x with only 20% of the k-space collected. The fraction of fully acquired low k-space frequencies was varied from 0% (all aliasing) to 20% (all blurring). The images were reconstructed using a multi-coil SENSE algorithm. We used two-alternative forced choice (2-AFC) and forced localization tasks with a subtle signal to estimate the human observer performance. The 2-AFC average human observer performance remained fairly constant across all imaging conditions. The forced localization task performance improved from the 0% condition to the 2.5% condition and remained fairly constant for the remaining conditions, suggesting that there was a decrease in task performance only in the pure aliasing situation. We modeled the average human performance using a sparse difference-of-Gaussians (SDOG) Hotelling observer model. Because the blurring in the undersampling direction makes the mean signal asymmetric, we explored an adaptation for irregular signals that made the SDOG template asymmetric. To improve the observer performance, we also varied the number of SDOG channels from 3 to 4. We found that despite the asymmetry in the mean signal, both the symmetric and asymmetric models reasonably predicted the human performance in the 2-AFC experiments. However, the symmetric model performed slightly better. We also found that a symmetric SDOG model with 4 channels, implemented using a spatial domain convolution and constrained to the possible signal locations, reasonably modeled the forced localization human observer results.
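The SDOG observer applies a small bank of difference-of-Gaussians channels to the image before forming a Hotelling template. A minimal sketch of such a channel bank follows; the widths, spacing, and channel count here are illustrative assumptions, not the study's values, and the asymmetric variant is not reproduced:

```python
import numpy as np

def dog_channels(n_channels=4, sigma0=2.0, alpha=1.67, q=1.4, size=64):
    """Radially symmetric difference-of-Gaussians channel profiles whose
    widths grow geometrically with channel index (parameter values here
    are illustrative, not those of the study)."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x**2 + y**2
    chans = []
    for j in range(n_channels):
        s = sigma0 * alpha**j
        chans.append(np.exp(-r2 / (2 * (q * s) ** 2)) - np.exp(-r2 / (2 * s**2)))
    return np.stack(chans)  # (n_channels, size, size)

def channel_responses(image, channels):
    """Project an image onto each channel profile (one scalar per channel)."""
    return np.tensordot(channels, image, axes=([1, 2], [0, 1]))
```

In a channelized Hotelling scheme, the Hotelling template is then computed in this low-dimensional channel-response space rather than on raw pixels.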

2.
Article in English | MEDLINE | ID: mdl-37131343

ABSTRACT

Undersampling in the frequency domain (k-space) in MRI accelerates the data acquisition. Typically, a fraction of the low frequencies is fully collected and the rest are equally undersampled. We used a fixed 1D undersampling factor of 5x where 20% of the k-space lines are collected, but varied the fraction of the low k-space frequencies that are fully sampled. We used a range of fully acquired low k-space frequencies from 0%, where the primary artifact is aliasing, to 20%, where the primary artifact is blurring in the undersampling direction. Small lesions were placed in the coil k-space data for fluid-attenuated inversion recovery (FLAIR) brain images from the fastMRI database. The images were reconstructed using a multi-coil SENSE reconstruction with no regularization. We conducted a human observer two-alternative forced choice (2-AFC) study with a signal known exactly and a search task with variable backgrounds for each of the acquisitions. We found that for the 2-AFC task, the average human observer did better with more of the low frequencies being fully sampled. For the search task, we found that after an initial improvement from having none of the low frequencies fully sampled to just 2.5%, the performance remained fairly constant. We found that the performance in the two tasks had a different relationship to the acquired data. We also found that the search task was more consistent with common practice in MRI, where a range between 5% and 10% of the low frequencies is fully sampled.
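The sampling scheme described above (a fully sampled central band plus equally spaced outer lines) can be sketched as a 1D mask builder; the matrix size and the way leftover lines are spread are illustrative assumptions:

```python
import numpy as np

def undersampling_mask(n_lines=256, accel=5, full_low_frac=0.05):
    """1D phase-encode sampling mask: a fully sampled low-frequency band at
    the center of k-space plus evenly spread outer lines, keeping 1/accel
    of all lines overall (illustrative sizes, not the study's)."""
    n_keep = n_lines // accel
    n_low = int(round(full_low_frac * n_lines))
    mask = np.zeros(n_lines, dtype=bool)
    start = n_lines // 2 - n_low // 2
    mask[start:start + n_low] = True          # fully sampled center band
    remaining = n_keep - int(mask.sum())      # budget left for outer lines
    if remaining > 0:
        outside = np.flatnonzero(~mask)
        picks = outside[np.round(np.linspace(0, outside.size - 1, remaining)).astype(int)]
        mask[picks] = True
    return mask

m = undersampling_mask()
```

Setting `full_low_frac` to 0.0 reproduces the "all aliasing" end of the range and 0.20 the "all blurring" end.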

3.
J Med Imaging (Bellingham) ; 10(1): 015502, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36852415

ABSTRACT

Purpose: Task-based assessment of image quality in undersampled magnetic resonance imaging provides a way of evaluating the impact of regularization on task performance. In this work, we evaluated the effect of total variation (TV) and wavelet regularization on human detection of signals with a varying background and validated a model observer in predicting human performance. Approach: Human observer studies used two-alternative forced choice (2-AFC) trials with a small signal known exactly task but with varying backgrounds for fluid-attenuated inversion recovery images reconstructed from undersampled multi-coil data. We used a 3.48 undersampling factor with TV and wavelet sparsity constraints. The sparse difference-of-Gaussians (S-DOG) observer with internal noise was used to model human observer detection. The internal noise for the S-DOG was chosen to match the average percent correct (PC) in 2-AFC studies for four observers using no regularization. That S-DOG model was used to predict the PC of human observers for a range of regularization parameters. Results: We observed a trend that the human observer detection performance remained fairly constant over a broad range of values of the regularization parameter before decreasing at large values. A similar result was found for the normalized ensemble root mean squared error. Without changing the internal noise, the model observer tracked the performance of the human observers as the regularization was increased but overestimated the PC for large amounts of regularization for TV and wavelet sparsity, as well as the combination of both parameters. Conclusions: For the task we studied, the S-DOG observer was able to reasonably predict human performance with both TV and wavelet sparsity regularizers over a broad range of regularization parameters. We observed a trend that task performance remained fairly constant for a range of regularization parameters before decreasing for large amounts of regularization.
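One common way to set a model observer's internal noise so that it matches a measured human PC is to assume the internal noise variance is a fixed multiple k of the external (template-response) variance, so the detectability index is scaled to d'/sqrt(1+k), and solve for k numerically. A sketch under that assumption (not necessarily the exact calibration used here):

```python
import math

def pc_2afc(dprime):
    """2-AFC proportion correct for a linear observer: PC = Phi(d'/sqrt(2)),
    and Phi(x) = 0.5*(1 + erf(x/sqrt(2))), so PC = 0.5*(1 + erf(d'/2))."""
    return 0.5 * (1.0 + math.erf(dprime / 2.0))

def calibrate_internal_noise(dprime0, target_pc):
    """Bisect for the internal-to-external noise variance ratio k such that
    the degraded index d'0/sqrt(1+k) reproduces a target human PC."""
    lo, hi = 0.0, 1e6
    while hi - lo > 1e-9:
        k = 0.5 * (lo + hi)
        if pc_2afc(dprime0 / math.sqrt(1.0 + k)) > target_pc:
            lo = k  # model still too good: needs more internal noise
        else:
            hi = k
    return 0.5 * (lo + hi)
```

Once k is fixed on the no-regularization condition, the same value is reused across all regularization settings, as in the study design.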

4.
Magn Reson Med ; 67(2): 389-404, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21661045

ABSTRACT

Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors including T(2)* decay are addressed. Recently developed MRI methods that correct for T(2)* to improve the accuracy of fat quantification either assume a common T(2)* (single-T(2)*) for better stability and noise performance or independently estimate the T(2)* for water and fat (dual-T(2)*) for reduced bias, but with a noise performance penalty. In this study, the tradeoff between bias and variance for different T(2)* correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T(2)* values. This analysis showed that all methods have better noise performance with very short first echo times and an echo spacing of ∼π/2 for single-T(2)* correction, and ∼2π/3 for dual-T(2)* correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T(2)* correction have less than 5% bias in the estimates of fat fraction.
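The bias-variance analysis rests on the Fisher information of the signal model. A simplified numerical sketch of a Cramér-Rao bound on the fat-fraction variance follows, assuming a single-T(2)* (here written R2* = 1/T2*) model with the field-map term omitted, a single-peak fat resonance, and illustrative parameter values and echo times:

```python
import numpy as np

FAT_FREQ = -420.0  # Hz; approximate single-peak fat-water shift at 1.5 T (assumption)

def signal(theta, te):
    """Single-R2* water-fat signal model, stacked as real and imaginary parts."""
    W, F, r2s = theta
    s = (W + F * np.exp(2j * np.pi * FAT_FREQ * te)) * np.exp(-r2s * te)
    return np.concatenate([s.real, s.imag])

def crlb_fat_fraction(theta, te, sigma=1.0):
    """Cramér-Rao lower bound on the variance of the fat fraction F/(W+F),
    via a numerical Jacobian and the delta method."""
    theta = np.asarray(theta, float)
    J = np.empty((2 * te.size, theta.size))
    for i in range(theta.size):
        h = 1e-5 * max(1.0, abs(theta[i]))
        d = np.zeros_like(theta)
        d[i] = h
        J[:, i] = (signal(theta + d, te) - signal(theta - d, te)) / (2 * h)
    fim = J.T @ J / sigma**2                      # Fisher information matrix
    W, F, _ = theta
    g = np.array([-F, W, 0.0]) / (W + F) ** 2     # gradient of F/(W+F)
    return g @ np.linalg.solve(fim, g)

te = 1.2e-3 + 1.6e-3 * np.arange(6)  # six echo times in seconds (illustrative)
bound = crlb_fat_fraction([80.0, 20.0, 40.0], te)
```

Sweeping the first echo time and echo spacing in such a sketch is the kind of echo-combination optimization the abstract describes, though the paper's full model and biased-estimator treatment are richer than this.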


Subjects
Algorithms; Fatty Liver/diagnosis; Image Enhancement/methods; Image Processing, Computer-Assisted/methods; Intra-Abdominal Fat/pathology; Magnetic Resonance Imaging/methods; Artifacts; Echo-Planar Imaging/methods; Hemosiderosis/diagnosis; Humans; Liver/pathology; Models, Theoretical; Monte Carlo Method; Sensitivity and Specificity; Software Design
5.
Med Phys ; 39(6): 3240-52, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22755707

ABSTRACT

PURPOSE: To investigate the correlation and stationarity of noise in volumetric computed tomography (CT) using the local discrete noise-power spectrum (NPS) and off-diagonal elements of the covariance matrix of the discrete Fourier transform of noise-only images (denoted Σ(DFT)). Experimental conditions were varied to affect noise correlation and stationarity, the effects were quantified in terms of the NPS and Σ(DFT), and practical considerations in CT performance characterization were identified. METHODS: Cone-beam CT (CBCT) images were acquired using a benchtop system comprising an x-ray tube and flat-panel detector for a range of acquisition techniques (e.g., dose and x-ray scatter) and three phantom configurations hypothesized to impart distinct effects on the NPS and Σ(DFT): (A) air, (B) a 20-cm-diameter water cylinder with a bowtie filter, and (C) the cylinder without a bowtie filter. The NPS and off-diagonal elements of the Σ(DFT) were analyzed as a function of position within the reconstructions. RESULTS: The local NPS varied systematically throughout the axial plane in a manner consistent with changes in fluence transmitted to the detector and view sampling effects. Variability in fluence was manifest in the NPS magnitude, e.g., a factor of ~2 variation in NPS magnitude within the axial plane for case C (cylinder without bowtie), compared to nearly constant NPS magnitude for case B (bowtie filter matched to the cylinder). View sampling effects were most prominent in case A (air), where the variance increased at greater distance from the center of reconstruction, and in case C (cylinder), where the NPS exhibited correlations in the radial direction. The effects of detector lag were observed as azimuthal correlation. The cylinder (without bowtie) had the strongest nonstationarity because of the larger variability in fluence transmitted to the detector. The diagonal elements of the Σ(DFT) were equivalent to the NPS estimated from the periodogram, and the average off-diagonal elements of the Σ(DFT) exhibited an amplitude of ~1% of the NPS for the experimental conditions investigated. Furthermore, the off-diagonal elements demonstrated fairly long tails of nearly constant amplitude, with magnitude somewhat reduced for experimental conditions associated with greater stationarity (viz., lower Σ(DFT) tails for cases A and B in comparison to case C). CONCLUSIONS: Volumetric CT exhibits nonstationarity in the NPS as hypothesized in relation to fluence uniformity and view sampling. Measurement of the NPS should seek to minimize such changes in noise correlations and include careful reporting of experimental conditions (e.g., phantom design and use of a bowtie filter) and spatial dependence (e.g., analysis at fixed radius within a phantom). Off-diagonal elements of the Σ(DFT) similarly depend on experimental conditions and can be readily computed from the same data as the NPS. This work begins to check assumptions in NPS analysis, examine the extent to which the NPS is an appropriate descriptor of noise correlations, and investigate the magnitude of off-diagonal elements of the Σ(DFT). While the magnitude of such off-diagonal elements appears to be low, their cumulative effect on space-variant detectability remains to be investigated, e.g., using task-specific figures of merit.
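A local NPS estimate of the kind used here is the ensemble-averaged periodogram of a noise-only region of interest; the diagonal of Σ(DFT) coincides with this estimate. A minimal 2D sketch (axis scaling and ROI handling simplified relative to a full measurement protocol):

```python
import numpy as np

def local_nps(noise_images, roi, pixel_size=1.0):
    """Local 2D noise-power spectrum from an ensemble of noise-only images,
    via the ensemble-averaged periodogram of one ROI.
    noise_images: (N, H, W) array; roi: (y0, y1, x0, x1) bounds."""
    y0, y1, x0, x1 = roi
    rois = noise_images[:, y0:y1, x0:x1].astype(float)
    rois = rois - rois.mean(axis=0)           # remove the ensemble mean
    ny, nx = rois.shape[1:]
    periodograms = np.abs(np.fft.fft2(rois)) ** 2
    nps = periodograms.mean(axis=0) * pixel_size**2 / (ny * nx)
    return np.fft.fftshift(nps)               # zero frequency at the center
```

Evaluating this at ROIs placed at different radii within the reconstruction is one way to expose the position dependence (nonstationarity) the abstract reports.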


Subjects
Cone-Beam Computed Tomography/methods; Fourier Analysis; Imaging, Three-Dimensional/methods
6.
Article in English | MEDLINE | ID: mdl-36267385

ABSTRACT

Two common regularization methods in reconstruction of magnetic resonance images are total variation (TV), which restricts the magnitude of the gradient in the reconstructed image, and wavelet sparsity, which assumes that the object being imaged is sparse in the wavelet domain. These regularization methods have resulted in images with fewer undersampling artifacts and less noise but introduce their own artifacts. In this work, we extend previous results on modeling of human observer performance for images using TV regularization to also predict human detection performance using wavelet regularization and a combination of wavelet and TV regularization. Small lesions were placed in the coil k-space data for fluid-attenuated inversion recovery (FLAIR) brain images from the fastMRI database. The data were undersampled using an acceleration factor of 3.48. The undersampled data were reconstructed using a range of regularization parameters for both the TV and wavelet regularization. The internal noise level for the sparse difference-of-Gaussians (S-DOG) model observer was chosen to match the average human percent correct in two-alternative forced choice (2-AFC) studies with a signal known exactly with variable backgrounds and no regularization. The S-DOG model largely tracked the human observer results except at large values of the regularization parameter, where it outperformed the average human observer. We found that regularization with either constraint alone or in combination did not improve human observer performance for this task.

7.
Phys Med Biol ; 66(14), 2021 Jul 16.
Article in English | MEDLINE | ID: mdl-34192682

ABSTRACT

Constrained reconstruction in magnetic resonance imaging (MRI) allows the use of prior information through constraints to improve reconstructed images. These constraints often take the form of regularization terms in the objective function used for reconstruction. Constrained reconstruction leads to images which appear to have fewer artifacts than reconstructions without constraints, but because the methods are typically nonlinear, the reconstructed images have artifacts whose structure is hard to predict. In this work, we compared different methods of optimizing the regularization parameter using a total variation (TV) constraint in the spatial domain and sparsity in the wavelet domain for one-dimensional (2.56×) variable-density undersampling. We compared the mean squared error (MSE), structural similarity (SSIM), L-curve and the area under the receiver operating characteristic curve (AUC) using a linear discriminant for detecting a small and a large signal. We used a signal-known-exactly task with varying backgrounds in a simulation where the anatomical variation was the major source of clutter for the detection task. Our results show that the AUC dependence on regularization parameters varies with the imaging task (i.e. the signal being detected). The choices of regularization parameters for MSE, SSIM, L-curve and AUC were similar. We also found that a model-based reconstruction including TV and wavelet sparsity did slightly better in terms of AUC than just enforcing data consistency, but using these constraints resulted in much better MSE and SSIM. These results suggest that the increased performance in MSE and SSIM overestimates the improvement in detection performance for the tasks in this paper. The MSE and SSIM metrics show a large difference in performance where the difference in AUC is small. To our knowledge, this is the first time that signal detection with varying backgrounds has been used to optimize constrained reconstruction in MRI.
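The AUC of a linear discriminant can be estimated nonparametrically from its test statistics on signal-present and signal-absent backgrounds via the Mann-Whitney statistic, which also equals the expected 2-AFC proportion correct. A sketch:

```python
def auc_from_scores(signal_scores, noise_scores):
    """Empirical AUC as the Mann-Whitney statistic: the fraction of
    (signal, noise) score pairs ranked correctly, counting ties as 1/2.
    This also equals the expected 2-AFC proportion correct."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            wins += 1.0 if s > n else (0.5 if s == n else 0.0)
    return wins / (len(signal_scores) * len(noise_scores))
```

In an optimization loop, the scores would be the linear discriminant's outputs on reconstructions at each candidate regularization parameter, and the parameter maximizing this AUC would be selected.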


Subjects
Algorithms; Magnetic Resonance Imaging; Artifacts; Computer Simulation; Image Processing, Computer-Assisted
8.
Article in English | MEDLINE | ID: mdl-36267661

ABSTRACT

Task-based assessment of image quality in undersampled magnetic resonance imaging (MRI) using constraints is important because of the need to quantify the effect of the artifacts on task performance. Fluid-attenuated inversion recovery (FLAIR) images are used in detection of small metastases in the brain. In this work we carry out two-alternative forced choice (2-AFC) studies with a small signal known exactly (SKE) but with varying background for reconstructed FLAIR images from undersampled multi-coil data. Using a 4x undersampling and a total variation (TV) constraint, we found that the human observer detection performance remained fairly constant for a broad range of values in the regularization parameter before decreasing at large values. Using the TV constraint did not improve task performance. The non-prewhitening eye (NPWE) observer and sparse difference-of-Gaussians (S-DOG) observer with internal noise were used to model human observer detection. The parameters for the NPWE and the internal noise for the S-DOG were chosen to match the average percent correct (PC) in 2-AFC studies for three observers using no regularization. The NPWE model observer tracked the performance of the human observers as the regularization was increased but slightly overestimated the PC for large amounts of regularization. The S-DOG model observer with internal noise tracked human performance for all levels of regularization studied. To our knowledge, this is the first time that model observers have been used to track human observer detection for undersampled MRI.

9.
Magn Reson Med ; 63(4): 849-57, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20373385

ABSTRACT

Noninvasive biomarkers of intracellular accumulation of fat within the liver (hepatic steatosis) are urgently needed for detection and quantitative grading of nonalcoholic fatty liver disease, the most common cause of chronic liver disease in the United States. Accurate quantification of fat with MRI is challenging due to the presence of several confounding factors, including T*(2) decay. The specific purpose of this work is to quantify the impact of T*(2) decay and develop a multiexponential T*(2) correction method for improved accuracy of fat quantification, relaxing assumptions made by previous T*(2) correction methods. A modified Gauss-Newton algorithm is used to estimate the T*(2) for water and fat independently. Improved quantification of fat is demonstrated, with independent estimation of T*(2) for water and fat, using phantom experiments. The tradeoffs in algorithm stability and accuracy between multiexponential and single exponential techniques are discussed.


Subjects
Fatty Liver/diagnosis; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Algorithms; Body Water; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/instrumentation; Models, Theoretical
10.
J Magn Reson Imaging ; 32(2): 493-500, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20677283

ABSTRACT

PURPOSE: To model the theoretical signal-to-noise ratio (SNR) behavior of 3-point chemical shift-based water-fat separation, using spectral modeling of fat, with experimental validation for spin-echo and gradient-echo imaging. The echo combination that achieves the best SNR performance for a given spectral model of fat was also investigated. MATERIALS AND METHODS: Cramér-Rao bound analysis was used to calculate the best possible SNR performance for a given echo combination. Experimental validation in a fat-water phantom was performed and compared with theory. In vivo scans were performed to compare fat separation with and without spectral modeling of fat. RESULTS: Theoretical SNR calculations for methods that include spectral modeling of fat agree closely with experimental SNR measurements. Spectral modeling of fat more accurately separates fat and water signals, with only a slight decrease in the SNR performance of the water-only image, although with a relatively large decrease in the fat SNR performance. CONCLUSION: The optimal echo combination that provides the best SNR performance for water using spectral modeling of fat is very similar to previous optimizations that modeled fat as a single peak. Therefore, the optimal echo spacing commonly used for single fat peak models is adequate for most applications that use spectral modeling of fat.


Subjects
Adipose Tissue/pathology; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Algorithms; Artifacts; Humans; Models, Statistical; Oils/chemistry; Phantoms, Imaging; Reproducibility of Results; Signal Processing, Computer-Assisted; Water/chemistry
11.
Med Phys ; 37(5): 2329-40, 2010 May.
Article in English | MEDLINE | ID: mdl-20527567

ABSTRACT

PURPOSE: An iterative tomographic reconstruction algorithm that simultaneously segments and reconstructs the reconstruction domain is proposed and applied to tomographic reconstructions from a sparse number of projection images. METHODS: The proposed algorithm uses a two-phase level set method segmentation in conjunction with an iterative tomographic reconstruction to achieve simultaneous segmentation and reconstruction. The simultaneous segmentation and reconstruction is achieved by alternating between level set function evolutions and per-region intensity value updates. To deal with the limited number of projections, a priori information about the reconstruction is enforced via a penalized likelihood function. Specifically, a smooth function within each region (a piecewise smooth function) and bounded function intensity values for each region are assumed. Such a priori information is formulated into a quadratic objective function with linear bound constraints. The level set function evolutions are achieved by artificially time evolving the level set function in the negative gradient direction; the intensity value updates are achieved by using the gradient projection conjugate gradient algorithm. RESULTS: The proposed simultaneous segmentation and reconstruction results were compared to "conventional" iterative reconstruction (with no segmentation), iterative reconstruction followed by segmentation, and filtered backprojection. Improvements of 6%-13% in the normalized root mean square error were observed when the proposed algorithm was applied to simulated projections of a numerical phantom and to real fan-beam projections of the Catphan phantom, both of which did not satisfy the a priori assumptions. CONCLUSIONS: The proposed simultaneous segmentation and reconstruction resulted in improved reconstruction image quality.
The algorithm correctly segments the reconstruction space into regions, preserves sharp edges between different regions, and smoothes the noise within each region. The proposed algorithm framework has the flexibility to be adapted to different a priori constraints while maintaining the benefits achieved by the simultaneous segmentation and reconstruction.


Subjects
Image Processing, Computer-Assisted/methods; Tomography/methods; Algorithms; Phantoms, Imaging; Time Factors
12.
Med Phys ; 35(8): 3597-606, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18777920

ABSTRACT

Separation of water from fat tissues in magnetic resonance imaging is important for many applications because signals from fat tissues often interfere with diagnoses that are usually based on water signal characteristics. Water and fat can be separated with images acquired at different echo time shifts. The three-point method solves for the unknown off-resonance frequency together with the water and fat densities. Noise performance of the method, quantified by the effective number of signals averaged (NSA), is an important metric of the water and fat images. The authors use error propagation theory and Monte Carlo simulation to investigate two common reconstructive approaches: an analytic-solution based estimation and a least-squares estimation. Two water-fat chemical shift (CS) encoding strategies, the symmetric (-theta, 0, theta) and the shifted (0, theta, 2theta) schemes are studied and compared. Results show that NSAs of water and fat can be different and they are dependent on the ratio of intensities of the two species and each of the echo time shifts. The NSA is particularly poor for the symmetric (-theta, 0, theta) CS encoding when the water and fat signals are comparable. This anomaly with equal amounts of water and fat is analyzed in a more intuitive geometric illustration. Theoretical prediction of NSA matches well with simulation results at high signal-to-noise ratio (SNR), while deviation arises at low SNR, which suggests that Monte Carlo simulation may be more appropriate to accurately predict noise performance of the algorithm when SNR is low.
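With the off-resonance frequency treated as known (in practice it is estimated jointly or iteratively), the three-point water-fat estimate reduces to a linear least-squares problem. A sketch assuming a single-peak fat model at an illustrative chemical-shift frequency:

```python
import numpy as np

FAT_FREQ = -420.0  # Hz; single-peak fat model at an assumed field strength

def three_point_fit(s, te, psi):
    """Least-squares water/fat amplitudes from three echoes, given the
    off-resonance frequency psi (Hz). Demodulate the field-map phase,
    then solve the linear model s_n = W + F*exp(i*2*pi*f_fat*te_n)."""
    te = np.asarray(te, float)
    s = np.asarray(s, complex) * np.exp(-2j * np.pi * psi * te)
    A = np.stack([np.ones_like(te, dtype=complex),
                  np.exp(2j * np.pi * FAT_FREQ * te)], axis=1)
    (w, f), *_ = np.linalg.lstsq(A, s, rcond=None)
    return w, f
```

The symmetric (-theta, 0, theta) and shifted (0, theta, 2theta) encodings discussed in the abstract correspond to different choices of `te`, and their differing noise performance (NSA) comes from the conditioning of the matrix A at those echo shifts.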


Subjects
Adipose Tissue/anatomy & histology; Artifacts; Body Water; Diagnostic Imaging/methods; Magnetic Resonance Imaging/methods; Computer Simulation; Humans; Monte Carlo Method; Noise
13.
Med Phys ; 33(5): 1372-9, 2006 May.
Article in English | MEDLINE | ID: mdl-16752573

ABSTRACT

Mathematical observers that track human performance can be used to reduce the number of human observer studies needed to optimize imaging systems. The performance of human observers for the detection of a 3.6 mm lung nodule in anatomical backgrounds was measured as a function of varying tomosynthetic angle and compared with mathematical observers. The human observer results showed a dramatic increase in the percent of correct responses, from 80% in the projection images to 96% at a tomosynthetic angle of just 3 degrees. This result suggests the potential usefulness of the scanned beam digital x-ray system for this application. Given the small number of images (40) used per tomosynthetic angle and the highly nonstationary statistical nature of the backgrounds, the nonprewhitening eye observer achieved a higher performance than the channelized Hotelling observer using a Laguerre-Gauss basis. The channelized Hotelling observer with internal noise and the eye filter matched to the projection data were shown to track human performance as the tomosynthetic angle changed. The validation of these mathematical observers extends their applicability to the optimization of tomosynthesis systems.


Subjects
Algorithms; Artificial Intelligence; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans; Lung Neoplasms/diagnostic imaging; Observer Variation; Quality Control; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
14.
Med Phys ; 31(2): 348-58, 2004 Feb.
Article in English | MEDLINE | ID: mdl-15000621

ABSTRACT

Digital radiography systems can be thought of as continuous linear shift-invariant systems followed by sampling. This view, along with the large number of pixels used for flat-panel systems, has motivated much of the work which attempts to extend figures of merit developed for analog systems, in particular noise equivalent quanta (NEQ) and detective quantum efficiency (DQE). A more general approach looks at the system as a continuous-to-discrete mapping and evaluates the signal-to-noise ratio (SNR) completely from the discrete data. In this paper, we study the effect of presampling blur on these figures of merit for a simple model that assumes that the background fluence is constant and that the blurring of the signal is deterministic. We find that for small signals, even in this idealized model, commonly used DQE/NEQ formulations do not accurately track the behavior of the fully digital SNR. Using these NEQ-based figures of merit would lead to different design decisions than using the ideal SNR. This study is meant to bring attention to the assumptions implicitly made when using Fourier methods.
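The NEQ/DQE formulations whose limits this work probes follow the standard Fourier definitions, which presuppose stationarity. A sketch in units where the pixel area is 1; an idealized photon counter with unit MTF and white Poisson noise (NPS equal to the mean fluence qbar) then gives DQE = 1:

```python
import numpy as np

def neq_dqe(mtf, nps, qbar):
    """Standard Fourier figures of merit, assuming stationarity and unit
    pixel area: NEQ(f) = qbar^2 * MTF(f)^2 / NPS(f), DQE(f) = NEQ(f)/qbar."""
    mtf = np.asarray(mtf, float)
    nps = np.asarray(nps, float)
    neq = qbar**2 * mtf**2 / nps
    return neq, neq / qbar
```

The abstract's point is precisely that, for small signals, figures of merit built this way can diverge from the fully digital SNR computed from the continuous-to-discrete description of the system.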


Subjects
Radiographic Image Enhancement/methods; Algorithms; Computer Simulation; Fourier Analysis; Models, Statistical; Poisson Distribution; X-Ray Intensifying Screens; X-Rays
15.
Med Phys ; 31(2): 359-67, 2004 Feb.
Article in English | MEDLINE | ID: mdl-15000622

ABSTRACT

The current paradigm for evaluating detectors in digital radiography relies on Fourier methods. Fourier methods rely on a shift-invariant and statistically stationary description of the imaging system. The theoretical justification for the use of Fourier methods is based on a uniform background fluence and an infinite detector. In practice, the background fluence is not uniform and detector size is finite. We study the effect of stochastic blurring and structured backgrounds on the correlation between Fourier-based figures of merit and Hotelling detectability. A stochastic model of the blurring leads to behavior similar to what is observed by adding electronic noise to the deterministic blurring model. Background structure does away with the shift invariance. Anatomical variation makes the covariance matrix of the data less amenable to Fourier methods by introducing long-range correlations. It is desirable to have figures of merit that can account for all the sources of variation, some of which are not stationary. For such cases, we show that the commonly used figures of merit based on the discrete Fourier transform can provide an inaccurate estimate of Hotelling detectability.


Subjects
Radiographic Image Enhancement/methods; Algorithms; Computer Simulation; Fourier Analysis; Image Processing, Computer-Assisted; Models, Statistical; Normal Distribution; Random Allocation; Stochastic Processes
16.
Phys Med Biol ; 58(5): 1433-46, 2013 Mar 07.
Article in English | MEDLINE | ID: mdl-23399724

ABSTRACT

We examine the noise advantages of having a computed tomography (CT) detector whose spatial resolution is significantly better (e.g. a factor of 2) than needed for a desired resolution in the reconstructed images. The effective resolution of detectors in x-ray CT is sometimes degraded by binning cells because the small cell size and fine sampling are not needed to achieve the desired resolution (e.g. with flat panel detectors). We studied the effect of the binning process on the noise in the reconstructed images and found that, while the noise-free images can be made identical for the native and binned systems for the same system MTF, in the presence of noise the binned system always results in noisier reconstructed images. The effect of the increased noise in the reconstructed images on lesion detection is scale (frequency content) dependent, with a larger difference between the high resolution and binned systems for imaging fine structure (small objects). We show simulated images reconstructed with both systems for representative objects and quantify the impact of the noise on the detection of the lesions based on mathematical observers. Through both subjective assessment of the reconstructed images and quantification using mathematical observers, we show that for a CT system where the photon noise is dominant, higher resolution in the detectors leads to better noise performance in the reconstructed images at any resolution.


Subjects
Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods; Reproducibility of Results
17.
Magn Reson Med ; 58(5): 910-21, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17969127

ABSTRACT

Conventional sensitivity encoding (SENSE) reconstruction is based on equations in the complex domain. However, for many MRI applications only the magnitude is relevant. If there exists an estimate of the underlying phase information, a magnitude-only phase-constrained reconstruction can help to improve the conditioning of the SENSE reconstruction problem. Consequently, this reduces g-factor-related noise enhancement. In previous attempts at phase-constrained SENSE reconstruction, image quality was hampered by strong aliasing artifacts resulting from inadequate phase estimates and high sensitivity to phase errors. If a full-resolution phase image is used, a significant reduction in aliasing errors and better noise properties compared to SENSE can be obtained. An iterative scheme that improves the phase estimate to better approximate the true phase is presented. The mathematical framework of the new approach is provided together with comparisons of conventional SENSE, phase-constrained SENSE, and the new phase-refinement method. Both theory and experimental verification demonstrate significantly better noise performance at high reduction factors, i.e., close to the theoretical limit. For applications that need only magnitude data, an iterative phase-constrained SENSE reconstruction can provide substantial SNR improvement over SENSE reconstruction and fewer artifacts than phase-constrained SENSE.
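The core SENSE step solves a small least-squares system per aliased pixel, and the g-factor quantifies the noise amplification that phase constraints aim to reduce. A sketch of plain (unconstrained) Cartesian SENSE unfolding, without the phase-constrained or iterative refinement machinery:

```python
import numpy as np

def sense_unfold(aliased, sens, reg=0.0):
    """Unfold one aliased pixel in Cartesian SENSE: solve the (regularized)
    normal equations for the R superimposed pixel values.
    aliased: (C,) coil measurements; sens: (C, R) coil sensitivities."""
    A = np.asarray(sens, dtype=complex)
    AtA = A.conj().T @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.conj().T @ np.asarray(aliased, dtype=complex))

def g_factor(sens):
    """Geometry (g-) factor at each of the R locations: the SENSE noise
    amplification relative to a fully sampled acquisition (always >= 1)."""
    A = np.asarray(sens, dtype=complex)
    AtA = A.conj().T @ A
    inv = np.linalg.inv(AtA)
    return np.sqrt(np.real(np.diag(inv) * np.diag(AtA)))
```

A phase-constrained variant would demodulate an estimated phase and restrict the unknowns to real values, roughly halving the number of unknowns per system and improving its conditioning, which is the mechanism behind the reduced g-factor the abstract describes.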


Subjects
Magnetic Resonance Imaging/methods , Humans , Models, Theoretical
18.
J Magn Reson Imaging ; 25(3): 644-52, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17326087

ABSTRACT

PURPOSE: To combine gradient-echo (GRE) imaging with a multipoint water-fat separation method known as "iterative decomposition of water and fat with echo asymmetry and least squares estimation" (IDEAL) for uniform water-fat separation. Robust fat suppression is necessary for many GRE imaging applications; unfortunately, uniform fat suppression is challenging in the presence of B(0) inhomogeneities. These challenges are addressed with the IDEAL technique. MATERIALS AND METHODS: Echo shifts for three-point IDEAL were chosen to optimize noise performance of the water-fat estimation, which is dependent on the relative proportion of water and fat within a voxel. Phantom experiments were performed to validate theoretical SNR predictions. Theoretical echo combinations that maximize noise performance are discussed, and examples of clinical applications at 1.5T and 3.0T are shown. RESULTS: The measured SNR performance validated theoretical predictions and demonstrated improved image quality compared to unoptimized echo combinations. Clinical examples of the liver, breast, heart, knee, and ankle are shown, including the combination of IDEAL with parallel imaging. Excellent water-fat separation was achieved in all cases. The utility of recombining water and fat images into "in-phase," "out-of-phase," and "fat signal fraction" images is also discussed. CONCLUSION: IDEAL-SPGR provides robust water-fat separation with optimized SNR performance at both 1.5T and 3.0T with multicoil acquisitions and parallel imaging in multiple regions of the body.
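The least-squares core of the three-point water-fat estimation can be sketched as follows, assuming the field map has already been estimated and demodulated (the iterative part of IDEAL is omitted). The fat chemical shift, echo times, and signal fractions are illustrative values, not those used in the study.

```python
import cmath

# Toy sketch of the IDEAL water-fat least-squares step.
df_fat = -210.0                      # fat chemical shift at 1.5 T, Hz (approx.)
echoes = [0.0016, 0.0032, 0.0048]    # three echo times, s (illustrative)
W_true, F_true = 0.7, 0.3            # water and fat signals to recover

# Signal model after field-map demodulation: s_n = W + F * exp(i*2*pi*df*t_n)
s = [W_true + F_true * cmath.exp(2j * cmath.pi * df_fat * t) for t in echoes]

# Least squares for complex unknowns x = [W, F] with design matrix
# A[n] = [1, exp(i*2*pi*df*t_n)]: solve the normal equations (A^H A) x = A^H s.
A = [[1.0 + 0j, cmath.exp(2j * cmath.pi * df_fat * t)] for t in echoes]
aha = [[sum(A[n][i].conjugate() * A[n][j] for n in range(3)) for j in range(2)]
       for i in range(2)]
ahs = [sum(A[n][i].conjugate() * s[n] for n in range(3)) for i in range(2)]
det = aha[0][0] * aha[1][1] - aha[0][1] * aha[1][0]
W_hat = (aha[1][1] * ahs[0] - aha[0][1] * ahs[1]) / det
F_hat = (aha[0][0] * ahs[1] - aha[1][0] * ahs[0]) / det
```

The echo shifts enter through the conditioning of this small system, which is why the paper optimizes them for noise performance as a function of the water-fat proportion in a voxel.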


Subjects
Adipose Tissue/anatomy & histology , Body Water , Echo-Planar Imaging/methods , Algorithms , Ankle Joint/anatomy & histology , Artifacts , Breast/anatomy & histology , Female , Heart/anatomy & histology , Humans , Imaging, Three-Dimensional/methods , Knee Joint/anatomy & histology , Least-Squares Analysis , Liver/anatomy & histology , Liver/pathology , Magnetics , Phantoms, Imaging , Reference Values , Reproducibility of Results , Signal Processing, Computer-Assisted
19.
J Magn Reson Imaging ; 26(4): 1153-61, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17896369

ABSTRACT

PURPOSE: To describe and demonstrate the feasibility of a novel multiecho reconstruction technique that achieves simultaneous water-fat decomposition and T2* estimation. The method removes the interference of water-fat separation with iron-induced T2* effects and therefore has potential for the simultaneous characterization of hepatic steatosis (fatty infiltration) and iron overload. MATERIALS AND METHODS: The algorithm, called "T2*-IDEAL," is based on the IDEAL water-fat decomposition method. A novel "complex field map" construct is used to estimate both R2* (1/T2*) and local B(0) field inhomogeneities using an iterative least-squares estimation method. Water and fat are then decomposed from source images corrected for both T2* decay and B(0) field inhomogeneity. RESULTS: A six-echo acquisition using the shortest possible echo times achieves an excellent balance between short scan time and reliable R2* measurement. Phantom experiments demonstrate the feasibility of the technique, with high accuracy in R2* measurement. Promising preliminary in vivo results are also shown. CONCLUSION: The T2*-IDEAL technique has potential applications in imaging of diffuse liver disease for the evaluation of both hepatic steatosis and iron overload in a single breath-hold.
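The "complex field map" construct can be illustrated with a deliberately stripped-down case: a single water species and two echoes, where folding R2* into the imaginary part of the field map makes one complex parameter capture both decay and off-resonance precession. The numbers are hypothetical; the actual T2*-IDEAL algorithm estimates this quantity jointly with water and fat by iterative least squares.

```python
import cmath

# Complex field map: psi_c = psi + i*R2*/(2*pi), so the per-echo factor
# exp(i*2*pi*psi_c*t) = exp(i*2*pi*psi*t) * exp(-R2*t) combines precession
# from B0 inhomogeneity with T2* decay. Illustrative values only.
psi_true = 40.0      # B0 field inhomogeneity, Hz
r2s_true = 50.0      # R2* = 1/T2*, 1/s
rho = 0.9            # water signal amplitude
psi_c = psi_true + 1j * r2s_true / (2 * cmath.pi)

t1, t2 = 0.002, 0.004   # two echo times, s
s1 = rho * cmath.exp(2j * cmath.pi * psi_c * t1)
s2 = rho * cmath.exp(2j * cmath.pi * psi_c * t2)

# Estimate the complex field map from the inter-echo ratio, then split it
# into field inhomogeneity (real part) and R2* (imaginary part).
psi_c_hat = cmath.log(s2 / s1) / (2j * cmath.pi * (t2 - t1))
psi_hat = psi_c_hat.real
r2s_hat = 2 * cmath.pi * psi_c_hat.imag
```

This two-echo ratio trick only works noiselessly for a single species; the point is that a single complex unknown carries both quantities, which is what lets the full algorithm correct the source images for T2* and B(0) simultaneously.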


Subjects
Adipose Tissue/metabolism , Echo-Planar Imaging/methods , Fatty Liver/pathology , Iron/metabolism , Water/chemistry , Algorithms , Hemochromatosis/metabolism , Hemochromatosis/pathology , Humans , Image Processing, Computer-Assisted/methods , Least-Squares Analysis , Phantoms, Imaging , Reproducibility of Results
20.
J Opt Soc Am A Opt Image Sci Vis ; 23(12): 2989-96, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17106455

ABSTRACT

The information content of data types in time-domain optical tomography is quantified by studying the detectability of signals in the attenuation and reduced scatter coefficients. Detection in both uniform and structured backgrounds is considered, and our results show a complex dependence of spatial detectability maps on the type of signal, data type, and background. In terms of the detectability of lesions, the mean time of arrival of photons and the total number of counts effectively summarize the information content of the full temporal waveform. A methodology for quantifying information content prior to reconstruction without assumptions of linearity is established, and the importance of signal and background characterization is highlighted.
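The two summary data types the abstract singles out (total counts and mean time of arrival) can be computed directly from a temporal point-spread function; the gamma-shaped toy waveform and bin width below are assumptions for illustration only.

```python
import math

# Simulated temporal point-spread function (TPSF): photon counts per time bin
# rising then decaying, a common qualitative shape for time-domain optical
# tomography data. Amplitude, decay constant, and bin width are illustrative.
dt = 0.05                                   # time-bin width, ns
t = [i * dt for i in range(200)]            # 0 to 10 ns
tpsf = [1000.0 * ti * math.exp(-ti / 0.8) for ti in t]

# Data type 1: total number of counts (the integral of the waveform).
total_counts = sum(tpsf)

# Data type 2: mean time of arrival of photons (count-weighted average time).
mean_time = sum(ti * c for ti, c in zip(t, tpsf)) / total_counts
```

These two scalars per source-detector pair are what the study finds to effectively summarize the information content of the full temporal waveform for lesion detectability.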


Subjects
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Tomography, Optical/methods , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity , Tomography, Optical/instrumentation