Results 1 - 20 of 86
1.
Magn Reson Med ; 76(2): 663-78, 2016 08.
Article in English | MEDLINE | ID: mdl-26479724

ABSTRACT

PURPOSE: Analytical phantoms have closed form Fourier transform expressions and are used to simulate MRI acquisitions. Existing three-dimensional (3D) analytical phantoms are unable to accurately model shapes of biomedical interest. The goal of this study was to demonstrate that polyhedral analytical phantoms have closed form Fourier transform expressions and can accurately represent 3D biomedical shapes. METHODS: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D and two-dimensional (2D) MRI acquisitions was described. RESULTS: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D and 2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high frequency edge aliasing compared with equivalent voxelized/rasterized phantoms. CONCLUSION: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. Magn Reson Med 76:663-678, 2016. © 2015 Wiley Periodicals, Inc.
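The facet-refinement behavior described in the abstract can be illustrated in miniature: approximating a smooth 2-D shape (a unit disk) by a regular n-gon, the error shrinks as the number of facets grows. This is a generic sketch of the convergence idea only, not the paper's 3-D Fourier computation.

```python
import math

def ngon_area(n: int) -> float:
    """Area of a regular n-gon inscribed in the unit circle."""
    return 0.5 * n * math.sin(2.0 * math.pi / n)

# The smooth target is the unit disk (area = pi); the approximation
# error falls roughly as 1/n^2 as facets are added.
errors = {n: math.pi - ngon_area(n) for n in (8, 64, 512)}
```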


Subject(s)
Biomimetics/methods , Three-Dimensional Imaging/methods , Magnetic Resonance Imaging/methods , Biological Models , Imaging Phantoms , Animals , Computer Simulation , Fourier Analysis , Humans , Magnetic Resonance Imaging/instrumentation , Reproducibility of Results , Sensitivity and Specificity
2.
IEEE Trans Nucl Sci ; 63(1): 117-129, 2016 Feb.
Article in English | MEDLINE | ID: mdl-27182079

ABSTRACT

The objectives of this investigation were to model the respiratory motion of solitary pulmonary nodules (SPN) and then use this model to determine the impact of respiratory motion on the localization and detection of small SPN in SPECT imaging for four reconstruction strategies. The respiratory motion of SPN was based on that of normal anatomic structures in the lungs determined from breath-held CT images of a volunteer acquired at two different stages of respiration. End-expiration (EE) and time-averaged (Frame Av) non-uniform-B-spline cardiac torso (NCAT) digital-anthropomorphic phantoms were created using this information for respiratory motion within the lungs. SPN were represented as 1 cm diameter spheres which underwent linear motion during respiration between the EE and end-inspiration (EI) time points. The SIMIND Monte Carlo program was used to produce SPECT projection data simulating Tc-99m depreotide (NeoTect) imaging. The projections were reconstructed using 1) no correction (NC), 2) attenuation correction (AC), 3) resolution compensation (RC), and 4) attenuation correction, scatter correction, and resolution compensation (AC_SC_RC). A human-observer localization receiver operating characteristics (LROC) study was then performed to determine the difference in localization and detection accuracy with and without the presence of respiratory motion. The LROC comparison determined that respiratory motion degrades tumor detection for all four reconstruction strategies, thus correction for SPN motion would be expected to improve detection accuracy. The inclusion of RC in reconstruction improved detection accuracy for both EE and Frame Av over NC and AC. Also the magnitude of the impact of motion was least for AC_SC_RC.
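The linear nodule motion between the end-expiration and end-inspiration gates reduces to a simple interpolation; a sketch with invented coordinates (not taken from the NCAT phantom):

```python
def spn_center(phase, ee, ei):
    """Linearly interpolate a nodule's center between its end-expiration
    position (phase = 0) and end-inspiration position (phase = 1)."""
    return tuple(a + phase * (b - a) for a, b in zip(ee, ei))

def frame_averaged_center(ee, ei, n_frames=16):
    """Mean position over sampled respiratory phases, analogous to the
    time-averaged (Frame Av) phantom."""
    centers = [spn_center(k / (n_frames - 1), ee, ei) for k in range(n_frames)]
    return tuple(sum(c) / n_frames for c in zip(*centers))
```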

3.
J Biomech Eng ; 137(5): 051004, 2015 May.
Article in English | MEDLINE | ID: mdl-25367177

ABSTRACT

This paper describes the process in which complex lesion geometries (specified by computer-generated perfusion defects) are incorporated in the description of nonlinear finite element (FE) mechanical models used for specifying the motion of the left ventricle (LV) in the 4D extended cardiac torso (XCAT) phantom to simulate gated cardiac image data. An image interrogation process was developed to define the elements in the LV mesh as ischemic or infarcted based upon the values of sampled intensity levels of the perfusion maps. The intensity values were determined for each of the interior integration points of every element of the FE mesh, and the average intensity level of each element was then determined. The elements with average intensity values below a user-controlled threshold were defined as ischemic or infarcted, depending upon the model being defined. For the infarction model cases, the thresholding and interrogation process was repeated in order to define a border zone (BZ) surrounding the infarction. This methodology was evaluated using perfusion maps created by the perfusion cardiac-torso (PCAT) phantom, an extension of the 4D XCAT phantom. The PCAT was used to create 3D perfusion maps representing 90% occlusions at four locations (left anterior descending (LAD) segments 6 and 9, left circumflex (LCX) segment 11, right coronary artery (RCA) segment 1) in the coronary tree. The volumes and shapes of the defects defined in the FE mechanical models were compared with the perfusion maps produced by the PCAT, and the models were incorporated into the XCAT phantom. The ischemia models reduced stroke volume (SV) by 18-59 ml and ejection fraction (EF) by 14-50 percentage points compared to the normal models. The infarction models had smaller reductions in SV and EF: 17-54 ml and 14-45 percentage points, respectively. The volumes of the ischemic/infarcted regions of the models were nearly identical to those obtained from the perfusion images and were highly correlated (R² = 0.99).
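The element-interrogation step amounts to averaging sampled intensities per element and thresholding; a minimal sketch with invented intensities and threshold (not the PCAT maps):

```python
def classify_elements(sampled, threshold):
    """Label each finite element by the mean of the perfusion-map
    intensities sampled at its interior integration points: elements
    whose mean falls below the user-controlled threshold belong to
    the ischemic/infarcted defect."""
    labels = []
    for intensities in sampled:
        mean = sum(intensities) / len(intensities)
        labels.append("defect" if mean < threshold else "normal")
    return labels
```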


Subject(s)
Coronary Circulation , Finite Element Analysis , Heart Ventricles/physiopathology , Mechanical Phenomena , Cardiovascular Models , Myocardial Infarction/physiopathology , Myocardial Ischemia/physiopathology , Biomechanical Phenomena , Cardiac-Gated Single-Photon Emission Computed Tomography , Heart Ventricles/diagnostic imaging , Humans , Three-Dimensional Imaging , Male , Myocardial Infarction/diagnostic imaging , Myocardial Ischemia/diagnostic imaging , Nonlinear Dynamics , Imaging Phantoms
4.
Med Phys ; 50(1): 74-88, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36018732

ABSTRACT

BACKGROUND: In recent years, low-dose computed tomography (LDCT) has played an important role in diagnostic CT, reducing the potential adverse effects of X-ray radiation on patients while maintaining diagnostic image quality. PURPOSE: Deep learning (DL)-based methods have played an increasingly important role in the field of LDCT imaging. However, their performance depends heavily on the consistency of feature distributions between training data and test data. Because of patients' breathing motion during data acquisition, paired LDCT and normal-dose CT images are difficult to obtain in realistic imaging scenarios. Moreover, LDCT images from simulation and from clinical CT examinations often have different feature distributions due to contamination by different amounts and types of image noise. If a network model trained with a simulated dataset is used to directly test clinical patients' LDCT data, its denoising performance may be degraded. Based on this, we propose a novel domain-adaptive denoising network (DADN) via noise estimation and transfer learning to resolve the out-of-distribution problem in LDCT imaging. METHODS: To overcome this adaptation issue, a novel network model consisting of a reconstruction network and a noise estimation network was designed. The noise estimation network, based on a double-branch structure, is used for image noise extraction and adaptation. Meanwhile, the U-Net-based reconstruction network uses several spatially adaptive normalization modules to fuse multi-scale noise input. Moreover, to facilitate the adaptation of the proposed DADN network to new imaging scenarios, we adopt a two-stage training scheme. In the first stage, a public simulated dataset is used for training. In the second, transfer-training stage, the network model is fine-tuned with a torso phantom dataset while some parameters are frozen. The rationale for the two-stage scheme is that the feature distribution of image content in the public dataset is complex and diverse, whereas the feature distribution of the noise pattern in the torso phantom dataset is closer to realistic imaging scenarios. RESULTS: In an evaluation study, the trained DADN model was applied to both the public and clinical patient LDCT datasets. Comparison of visual inspection and quantitative results shows that the proposed DADN network model performs well in terms of noise and artifact suppression while effectively preserving image contrast and details. CONCLUSIONS: In this paper, we have proposed a new DL network to overcome the domain adaptation problem in LDCT image denoising. The results demonstrate the feasibility and effectiveness of the proposed DADN network model as a new DL-based LDCT image denoising method.
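The freeze-and-fine-tune idea of the second training stage can be sketched generically in plain Python (a stand-in for a DL framework; the layer names and numbers below are invented for illustration):

```python
def transfer_step(params, grads, lr, frozen):
    """One gradient step of the transfer stage: parameters named in
    `frozen` keep their first-stage (pretrained) values, while the rest
    are fine-tuned on new-domain data."""
    return {name: (value if name in frozen else value - lr * grads[name])
            for name, value in params.items()}

# Hypothetical scalar "layers" standing in for network parameter tensors.
params = {"noise_branch": 1.0, "recon_head": 2.0}
grads = {"noise_branch": 0.5, "recon_head": 0.5}
updated = transfer_step(params, grads, lr=0.1, frozen={"recon_head"})
```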


Subject(s)
Algorithms , X-Ray Computed Tomography , Humans , Signal-To-Noise Ratio , X-Ray Computed Tomography/methods , Computer Simulation , Machine Learning , Computer-Assisted Image Processing/methods
5.
IEEE Trans Med Imaging ; 42(9): 2616-2630, 2023 09.
Article in English | MEDLINE | ID: mdl-37030685

ABSTRACT

Deep learning (DL)-based image processing methods have been successfully applied to low-dose x-ray images under the assumption that the feature distribution of the training data is consistent with that of the test data. However, low-dose computed tomography (LDCT) images from different commercial scanners may contain different amounts and types of image noise, violating this assumption. Moreover, in the application of DL-based image processing methods to LDCT, the feature distributions of LDCT images from simulation and from clinical CT examinations can be quite different. Therefore, network models trained with simulated image data, or with LDCT images from one specific scanner, may not work well for another CT scanner and image processing task. To solve this domain adaptation problem, a novel generative adversarial network (GAN) with noise encoding transfer learning (NETL), or GAN-NETL, is proposed in this study to generate a paired dataset with a different noise style. Specifically, we propose a noise encoding operator, incorporated into the generator, to extract a noise style. Meanwhile, with a transfer learning (TL) approach, the image noise encoding operator transforms the noise type of the source domain to that of the target domain for realistic noise generation. One public and two private datasets were used to evaluate the proposed method. Experimental results demonstrated the feasibility and effectiveness of the proposed GAN-NETL model in LDCT image synthesis. In addition, we conducted an additional image denoising study using the synthesized clinical LDCT data, which verified the merit of the proposed synthesis in improving the performance of DL-based LDCT processing methods.


Subject(s)
Deep Learning , Algorithms , X-Ray Computed Tomography/methods , Computer-Assisted Image Processing/methods , Computer Simulation , Signal-To-Noise Ratio
6.
AJR Am J Roentgenol ; 198(6): 1380-6, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22623552

ABSTRACT

OBJECTIVE: The aim of this in vitro study was to examine the capability of three protocols of dual-energy CT imaging in distinguishing calcium oxalate, calcium phosphate, and uric acid kidney stones. MATERIALS AND METHODS: A total of 48 calcium oxalate, calcium phosphate, and uric acid human kidney stone samples were placed in individual containers inside a cylindrical water phantom and imaged with a dual-energy CT scanner using three scanning protocols with different combinations of tube voltage, with and without a tin filter: 80 and 140 kVp without a tin filter, 100 and 140 kVp with a tin filter, and 80 and 140 kVp with a tin filter. The mean attenuation value (in Hounsfield units) of each stone was recorded in both the low- and high-energy CT images in each protocol. The dual-energy ratio of the mean attenuation values of each stone was computed for each protocol. RESULTS: For all three protocols, the uric acid stones were significantly different (p < 0.001) from the calciferous stones according to their dual-energy ratio values. For differentiating calcium oxalate and calcium phosphate stones, the difference between their dual-energy ratio values was statistically significant, with different degrees of significance (range, p < 0.001 to p = 0.03) for all three protocols. On the basis of the values of the area under the receiver operating characteristic curve (AUC) for calcified stone differentiation, the three protocols were ranked in the following order: the 80- and 140-kVp tin filter protocol (AUC, 0.996), the 100- and 140-kVp tin filter protocol (AUC, 0.918), and the 80- and 140-kVp protocol (AUC, 0.871). CONCLUSION: The tin filter added to the high-energy tube and the use of a wider dual-energy difference are important for improving the stone differentiation capability of dual-energy CT imaging.
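The dual-energy ratio used to separate stone types is simply the ratio of mean HU values between the two energy images; the cutoff and HU numbers below are invented for illustration, not values from the study:

```python
def dual_energy_ratio(hu_low: float, hu_high: float) -> float:
    """Ratio of a stone's mean attenuation in the low-energy image to
    that in the high-energy image."""
    return hu_low / hu_high

def classify_stone(ratio: float, cutoff: float = 1.15) -> str:
    # Uric acid stones show dual-energy ratios near 1; calciferous
    # stones sit well above. The cutoff here is illustrative only.
    return "uric acid" if ratio < cutoff else "calciferous"
```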


Subject(s)
Kidney Calculi/chemistry , Kidney Calculi/diagnostic imaging , Tin , X-Ray Computed Tomography/instrumentation , Analysis of Variance , Calcium Oxalate/analysis , Calcium Phosphates/analysis , Humans , In Vitro Techniques , Imaging Phantoms , ROC Curve , Uric Acid/analysis
7.
Med Phys ; 38(3): 1307-12, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21520842

ABSTRACT

PURPOSE: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy using differentiated backprojection (DBP) with projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography," Phys. Med. Biol. 53, 2207-2231 (2008)] and the tiny a priori knowledge that there exists a nearly piecewise constant subregion. METHODS: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography," Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography," Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain the pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of approximately 500 mm. The projection data were truncated either moderately, to limit the detector coverage to Ø 350 mm of the object, or severely, to cover Ø 199 mm. Images were reconstructed using the proposed method. RESULTS: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel value, was less than 2.0% or 4.5% in the moderate and severe truncation cases, respectively, except near the boundary of the ROI.
CONCLUSIONS: The proposed method allows for reconstructing interior ROI images with sufficient accuracy with a tiny knowledge that there exists a nearly piecewise constant subregion.
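The accuracy metric quoted above, the coefficient of variation, is just the RMSE normalized by the mean true pixel value; a direct sketch:

```python
import math

def coefficient_of_variation(recon, truth):
    """Root-mean-square error between reconstruction and reference,
    divided by the mean reference pixel value (as used in the abstract)."""
    n = len(truth)
    rmse = math.sqrt(sum((r - t) ** 2 for r, t in zip(recon, truth)) / n)
    return rmse / (sum(truth) / n)
```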


Subject(s)
Computer-Assisted Image Processing/methods , Algorithms , Humans , Reproducibility of Results , X-Ray Computed Tomography
8.
Med Phys ; 38(2): 1089-102, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21452746

ABSTRACT

PURPOSE: Recently, photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed for potential use in clinical computed tomography (CT) scanners. These PCXDs have great potential to improve the quality of CT images owing to the absence of electronic noise and of weights applied to the counts, and to the additional spectral information. With the high count rates encountered in clinical CT, however, coincident photons are recorded as one event with a higher or lower energy due to the finite speed of the PCXD. This phenomenon is called a "pulse pileup event" and results in both a loss of counts (called "deadtime losses") and distortion of the recorded energy spectrum. Even though the performance of PCXDs is being improved, it is essential to develop algorithmic methods based on accurate models of the detector properties to compensate for these effects. To date, only one PCXD (model DXMCT-1, DxRay, Inc., Northridge, CA) has been used for clinical CT studies. The aim of this study was to evaluate the agreement between data measured by the DXMCT-1 and those predicted by analytical models for the energy response, the deadtime losses, and the distorted recorded spectrum caused by pulse pileup effects. METHODS: An energy calibration was performed using 99mTc (140 keV), 57Co (122 keV), and an x-ray beam obtained with four x-ray tube voltages (35, 50, 65, and 80 kVp). The DXMCT-1 was placed 150 mm from the x-ray focal spot; the count rates and the spectra were recorded at various tube current values from 10 to 500 µA for a tube voltage of 80 kVp. Using these measurements, for each pulse-height comparator we estimated three parameters describing the photon energy-pulse height curve, the detector deadtime tau, a coefficient k that relates the x-ray tube current I to an incident count rate a by a = k × I, and the incident spectrum.
The mean pulse shape of all comparators was acquired in a separate study and was used in the model to estimate the distorted recorded spectrum. The agreement between data measured by the DXMCT-1 and those predicted by the models was quantified by the coefficient of variation (COV), i.e., the root mean square difference divided by the mean of the measurement. RESULTS: Photon energy versus pulse height curves calculated with an analytical model and those measured using the DXMCT-1 were in agreement within 0.2% in terms of the COV. The COV between the output count rates measured and those predicted by analytical models was 2.5% for deadtime losses of up to 60%. The COVs between spectra measured and those predicted by the detector model were within 3.7%-7.2% with deadtime losses of 19%-46%. CONCLUSIONS: It has been demonstrated that the performance of the DXMCT-1 agreed exceptionally well with the analytical models regarding the energy response, the count rate, and the recorded spectrum with pulse pileup effects. These models will be useful in developing methods to compensate for these effects in PCXD-based clinical CT systems.
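The count-rate and deadtime-loss behavior being modeled can be sketched with the textbook paralyzable-detector form; the paper fits its own detector-specific model, and the k and tau values below are purely illustrative:

```python
import math

def recorded_rate(tube_current, k, tau):
    """Paralyzable deadtime model: incident count rate a = k * I, and
    recorded rate m = a * exp(-a * tau)."""
    a = k * tube_current
    return a * math.exp(-a * tau)

def deadtime_loss(tube_current, k, tau):
    """Fraction of incident counts lost to pileup/deadtime: 1 - m / a."""
    a = k * tube_current
    return 1.0 - recorded_rate(tube_current, k, tau) / a
```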


Subject(s)
Theoretical Models , Photons , X-Ray Computed Tomography/methods , Reproducibility of Results
9.
J Gerontol A Biol Sci Med Sci ; 76(2): 211-215, 2021 01 18.
Article in English | MEDLINE | ID: mdl-32585682

ABSTRACT

Chronic inflammation (CI) in older adults is associated with reduced health span and life span. Interleukin-6 (IL-6) is one CI marker that is strongly associated with adverse health outcomes and mortality in aging. We have previously characterized a mouse model of frailty and chronic inflammatory pathway activation (IL-10tm/tm, IL-10 KO) that demonstrates the upregulation of numerous proinflammatory cytokines, including IL-6. We sought to identify a more specific role for IL-6 within the context of CI and aging and developed a mouse with targeted deletion of both IL-10 and IL-6 (IL-10tm/tm/IL-6tm/tm, DKO). Phenotypic characteristics, cytokine measurements, cardiac myocardial oxygen consumption, physical function, and survival were measured in DKO mice and compared to age- and gender-matched IL-10 KO and wild-type mice. Our findings demonstrate that selective knockdown of IL-6 in a frail mouse with CI resulted in the reversal of some of the CI-associated changes. We observed increased protective mitochondrial-associated lipid metabolites, decreased cardiac oxaloacetic acid, improved myocardial oxidative metabolism, and better short-term functional performance in DKO mice. However, the DKO mice also demonstrated higher mortality. This work shows the pleiotropic effects of IL-6 on aging and frailty.


Subject(s)
Aging/physiology , Inflammation/physiopathology , Interleukin-6/deficiency , Aging/genetics , Animals , Chronic Disease , Citric Acid Cycle , Animal Disease Models , Female , Glycolysis , Inflammation/genetics , Interleukin-10/deficiency , Interleukin-10/genetics , Interleukin-10/physiology , Interleukin-6/genetics , Interleukin-6/physiology , Lipids/blood , Male , Mice , Inbred C57BL Mice , Knockout Mice , Cardiac Mitochondria/metabolism , Oxidative Phosphorylation
10.
Mol Imaging ; 9(2): 108-16, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20236603

ABSTRACT

We investigated whether small-animal positron emission tomography (PET) could be used in combination with computed tomography (CT) imaging techniques for longitudinal monitoring of the injured spinal cord. In adult female Sprague-Dawley rats (n = 6), the ninth thoracic (T9) spinal cord segment was exposed by laminectomy and subsequently contused using the Infinite Horizon impactor (Precision System and Instrumentation, Lexington, KY) at 225 kDyn. In control rats (n = 4), the T9 spinal cord was exposed by laminectomy but not contused. At 0.5 hours and 3, 7, and 21 days postinjury, 2-[(18)F]fluoro-2-deoxy-d-glucose ([(18)F]FDG) was given intravenously followed 1 hour later by sequential PET and CT. Regions of interest (ROIs) at T9 (contused) and T6 (uninjured) spinal cord segments were manually defined on CT images and aided by fiduciary markers superimposed onto the coregistered PET images. Monte Carlo simulation revealed that about 33% of the activity in the ROIs was due to spillover from adjacent hot areas. A simulation-based partial-volume compensation (PVC) method was developed and used to correct for this spillover effect. With PET-CT, combined with PVC, we were able to serially measure standardized uptake values of the T9 and T6 spinal cord segments and reveal small, but significant, differences. This approach may become a tool to assess the efficacy of spinal cord repair strategies.
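The spillover arithmetic and the SUV endpoint can be sketched as follows; the 33% spill fraction comes from the abstract's Monte Carlo estimate, the first-order subtraction form and the activity numbers are illustrative assumptions (the paper's PVC method is simulation-based):

```python
def suv(tissue_conc, injected_dose, body_weight):
    """Standardized uptake value: tissue activity concentration divided
    by injected dose per unit body weight."""
    return tissue_conc / (injected_dose / body_weight)

def spillover_corrected(measured, spill_fraction=0.33):
    """First-order partial-volume compensation: discard the fraction of
    ROI activity attributed (here, by simulation) to spill-in from
    adjacent hot areas."""
    return measured * (1.0 - spill_fraction)
```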


Subject(s)
Fluorodeoxyglucose F18 , Positron-Emission Tomography/methods , Spinal Cord Injuries/diagnostic imaging , Animals , Computer Simulation , Female , Fluorodeoxyglucose F18/pharmacokinetics , Monte Carlo Method , Rats , Sprague-Dawley Rats , Spinal Cord/diagnostic imaging , Spinal Cord/metabolism , Spinal Cord Injuries/metabolism
11.
Med Phys ; 37(4): 1610-8, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20443482

ABSTRACT

PURPOSE: A rotating multi-segment slant-hole (RMSSH) collimator can provide much higher sensitivity (approximately 3 times higher for a four-segment collimator with a 30-degree slant angle) than a parallel-hole (PH) collimator with similar spatial resolution for imaging small organs such as the heart and the breast. In this article, the authors evaluated the performance of myocardial perfusion SPECT (MPS) using a RMSSH collimator compared to MPS using low-energy high-resolution parallel-hole collimators. METHODS: The authors conducted computer simulation studies using the NURBS-based cardiac-torso phantom, receiver operating characteristic (ROC) analysis using the channelized Hotelling observer, physical phantom experiments, and pilot patient studies to evaluate the performance of MPS using a rotating four-segment slant-hole (R4SSH) collimator with respect to MPS using a PH collimator. RESULTS: In the simulation study, R4SSH MPS provides images with a superior contrast-noise trade-off relative to PH MPS with the same acquisition time. The defect detectability, in terms of the area under the ROC curve, is significantly higher for R4SSH MPS than for PH MPS, with p-values < 0.01. In the phantom experiments, R4SSH MPS images with a 7.5-min acquisition had a noise level and overall image quality similar to those of PH MPS with a 21-min acquisition. Pilot patient studies showed that, with the same acquisition time, R4SSH SPECT using a single-head camera gave images of similar quality to PH SPECT using a dual-head camera. CONCLUSIONS: RMSSH SPECT has the potential to improve coronary artery disease detection and the workflow of SPECT imaging acquisition due to the high sensitivity of the RMSSH collimator.


Subject(s)
Myocardial Perfusion Imaging/methods , Myocardium/pathology , Single-Photon Emission Computed Tomography/methods , Algorithms , Computer Simulation , Equipment Design , Female , Humans , Computer-Assisted Image Interpretation/instrumentation , Computer-Assisted Image Interpretation/methods , Computer-Assisted Image Processing , Male , Imaging Phantoms , Pilot Projects , ROC Curve , Reproducibility of Results
12.
IEEE Trans Nucl Sci ; 57(5): 2571, 2010 Aug 26.
Article in English | MEDLINE | ID: mdl-21516240

ABSTRACT

Using a heart motion observer, we compared the performance of two image reconstruction techniques, a 3D OS-EM algorithm with post-reconstruction Butterworth spatial filtering and a 4D MAP-RBI-EM algorithm. The task was to classify gated myocardial perfusion (GMP) SPECT images of beating hearts with or without regional motion abnormalities. Noise-free simulated GMP SPECT projection data were generated from two 4D NCAT beating-heart phantom models, one with normal motion and the other with a 50% motion defect in a pie-shaped wedge region-of-interest (ROI) in the anterior-lateral left ventricular wall. The projection data were scaled to a clinical GMP SPECT count level before Poisson noise was simulated to generate 40 noise realizations. The noise-free and noisy projection data were reconstructed using the two reconstruction algorithms, with parameters chosen to optimize the tradeoff between image bias and noise. As a motion observer, a previously developed 3D motion estimation method was applied to estimate the radial motion in the ROI from two adjacent gates. Receiver operating characteristic (ROC) curves were computed from the radial motion magnitudes corresponding to each reconstruction technique, and the area under the ROC curve (AUC) was calculated as an index for classification of regional motion. The reconstructed images with a better bias-noise tradeoff were found to offer better classification of hearts with or without regional motion defects. The 3D cardiac motion estimation algorithm, serving as a heart motion observer, was better able to distinguish abnormal from normal regional motion in GMP SPECT images obtained from the 4D MAP-RBI-EM algorithm than from the 3D OS-EM algorithm with post-reconstruction Butterworth spatial filtering.
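The AUC index used in this kind of study is equivalent to the Mann-Whitney statistic over the observer's per-image scores; a sketch with invented scores (not the study's motion magnitudes):

```python
def auc(abnormal_scores, normal_scores):
    """Area under the ROC curve computed as the Mann-Whitney statistic:
    the probability that an abnormal case outscores a normal one,
    counting ties as half a win."""
    wins = 0.0
    for a in abnormal_scores:
        for n in normal_scores:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(abnormal_scores) * len(normal_scores))
```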

13.
J Med Imaging (Bellingham) ; 7(4): 042805, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32313817

ABSTRACT

The accelerating complexity and variety of medical imaging devices and methods have outpaced the ability to evaluate and optimize their design and clinical use. This is a significant and increasing challenge for both scientific investigations and clinical applications. Evaluations would ideally be done using clinical imaging trials. These experiments, however, are often not practical due to ethical limitations, expense, time requirements, or lack of ground truth. Virtual clinical trials (VCTs) (also known as in silico imaging trials or virtual imaging trials) offer an alternative means to efficiently evaluate medical imaging technologies virtually. They do so by simulating the patients, imaging systems, and interpreters. The field of VCTs has been constantly advanced over the past decades in multiple areas. We summarize the major developments and current status of the field of VCTs in medical imaging. We review the core components of a VCT: computational phantoms, simulators of different imaging modalities, and interpretation models. We also highlight some of the applications of VCTs across various imaging modalities.

14.
J Nucl Med ; 50(5): 667-70, 2009 May.
Article in English | MEDLINE | ID: mdl-19372476

ABSTRACT

Because of the development of gene knockout and transgenic technologies, small animals, such as mice and rats, have become the most widely used animals for cardiovascular imaging studies. Imaging can provide a method to serially evaluate the effect of a particular genetic mutation or pharmacologic therapy (1). In addition, imaging can be used as a noninvasive screening tool for particular cardiovascular phenotypes. Outcome measures of therapeutic efficacy, such as ejection fraction, left ventricular mass, and ventricular volume, can be determined noninvasively as well. Furthermore, small-animal imaging can be used to develop and test new molecular imaging probes (2,3). However, the small size of the heart and rapid heart rate of murine models create special challenges for cardiovascular imaging.


Subject(s)
Diagnostic Imaging/trends , Diagnostic Imaging/veterinary , Heart/diagnostic imaging , Animal Models , Myocardium/pathology , Animals , Radiography , Radionuclide Imaging
15.
Proc IEEE Inst Electr Electron Eng ; 97(12): 1954-1968, 2009 Dec.
Article in English | MEDLINE | ID: mdl-26472880

ABSTRACT

Recent work in the development of computerized phantoms has focused on the creation of ideal "hybrid" models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. 
Like stylized models, they can easily be manipulated to model anatomical variations and patient motion. With the vast improvement in realism, the phantoms developed in our lab can be combined with accurate models of the imaging process (SPECT, PET, CT, magnetic resonance imaging, and ultrasound) to generate simulated imaging data close to that from actual human or animal subjects. As such, they can provide vital tools to generate predictive imaging data from many different subjects under various scanning parameters from which to quantitatively evaluate and improve imaging devices and techniques. From the MCAT to XCAT, we will demonstrate how NURBS and SD surface modeling have resulted in a major evolutionary advance in the development of computerized phantoms for imaging research.
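The NURBS modeling described above rests on the B-spline basis functions evaluated by the Cox-de Boor recursion. As a minimal illustration (this is not code from the XCAT/MOBY phantoms; the function names and the curve example are hypothetical), a Python sketch of the recursion and a rational curve point:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at u.

    Uses the half-open interval convention, so u must be strictly less
    than the final knot value.
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        val += (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        val += (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return val

def nurbs_point(u, ctrl, weights, degree, knots):
    """One point on a NURBS curve: weighted rational blend of control points."""
    basis = np.array([bspline_basis(i, degree, u, knots)
                      for i in range(len(ctrl))])
    w = basis * weights
    return (w[:, None] * np.asarray(ctrl)).sum(axis=0) / w.sum()
```

With unit weights the curve reduces to a plain B-spline; non-unit weights let NURBS represent conics exactly, one reason the primitive suits smooth organ surfaces.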

16.
Phys Med Biol ; 54(13): 4325-39, 2009 Jul 07.
Article in English | MEDLINE | ID: mdl-19531850

ABSTRACT

This work applies a previously developed analytical algorithm to the reconstruction problem in a rotating multi-segment slant-hole (RMSSH) SPECT system. The RMSSH collimator has greater detection efficiency than the parallel-hole collimator with comparable spatial resolution at the expense of a limited common volume-of-view (CVOV), and is therefore suitable for detecting low-contrast lesions in breast, cardiac and brain imaging. The absorption of gamma photons in both the human breast and brain can be assumed to follow an exponential rule with a constant attenuation coefficient. In this work, the RMSSH SPECT data of a digital NCAT phantom with breast attachment are modeled as the uniformly attenuated Radon transform of the activity distribution. These data are reconstructed using an analytical algorithm called the DBH method, an acronym for differentiation backprojection followed by a finite weighted inverse Hilbert transform. The projection data are first differentiated along a specific direction in the projection space and then backprojected to the image space. The result of this first step is equal to a one-dimensional finite weighted Hilbert transform of the object; this transform is then numerically inverted to obtain the reconstructed image. With the limited CVOV of the RMSSH collimator, the detector captures gamma photon emissions from the breast and from parts of the torso. The simulation results show that the DBH method is capable of exactly reconstructing the activity within a well-defined region-of-interest (ROI) within the breast if the activity is confined to the breast or if the activity outside the CVOV is uniformly attenuated for each measured projection, whereas a conventional filtered backprojection algorithm reconstructs only the high-frequency components of the activity function in the same geometry.
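The forward model named here, the uniformly attenuated Radon transform, weights each emission by exp(-mu*d), where d is the depth traversed toward the detector at a constant attenuation coefficient mu. A discrete one-view sketch of that forward model only (not the DBH inversion; the geometry and names are assumptions for illustration):

```python
import numpy as np

def attenuated_projection(activity, mu, dx=1.0):
    """One parallel-beam view of the uniformly attenuated Radon transform.

    The detector is assumed on the depth-0 side of axis 0, so an emission
    at depth index d contributes activity[d, x] * exp(-mu * d * dx).
    """
    depths = np.arange(activity.shape[0]) * dx
    weights = np.exp(-mu * depths)[:, None]
    return (activity * weights).sum(axis=0) * dx
```

Other view angles follow by rotating the activity array before projecting; mu = 0 recovers the ordinary Radon transform.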


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Tomography, Emission-Computed, Single-Photon/instrumentation , Tomography, Emission-Computed, Single-Photon/methods , Computer-Aided Design , Equipment Design , Equipment Failure Analysis , Reproducibility of Results , Sensitivity and Specificity
17.
Phys Med Biol ; 54(10): 3161-71, 2009 May 21.
Article in English | MEDLINE | ID: mdl-19420417

ABSTRACT

The purpose of this study is to optimize the dynamic Rb-82 cardiac PET acquisition and reconstruction protocols for maximum myocardial perfusion defect detection, using realistic simulation data and task-based evaluation. Time-activity curves (TACs) of different organs under both rest and stress conditions were extracted from dynamic Rb-82 PET images of five normal patients. A combined SimSET-GATE Monte Carlo simulation was used to generate nearly noise-free cardiac PET data from a time series of 3D NCAT phantoms with organ activities modeling different pre-scan delay times (PDTs) and total acquisition times (TATs). Poisson noise was added to the nearly noise-free projections, and the OS-EM algorithm was applied to generate noisy reconstructed images. A channelized Hotelling observer (CHO) with 32x32 spatial templates corresponding to four octave-wide frequency channels was used to evaluate the images. The area under the ROC curve (AUC) was calculated from the CHO rating data as an index of image quality in terms of myocardial perfusion defect detection. Butterworth post-filtering at 0.5 cycle cm(-1) of the OS-EM (21 subsets) reconstructed images generates the highest AUC values, while iteration numbers 1 to 4 show no difference in AUC. The optimized PDTs for both rest and stress conditions are found to be close to the crossing points of the left ventricular chamber and myocardium TACs, which may promote an individualized PDT for patient data processing and image reconstruction. Shortening the TATs for
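The evaluation pipeline above — channelize each image, form the Hotelling template from class statistics, rate every image, and estimate the AUC from the ratings — can be sketched compactly. This is a toy version with assumed Gaussian test images and ad hoc channels, not the study's 32x32 templates with four octave-wide frequency channels:

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer with an AUC readout.

    channels: (n_pixels, n_channels) matrix of channel templates.
    Ratings are the Hotelling template applied to the channel outputs;
    the AUC is the Mann-Whitney estimate of P(signal rating > noise rating).
    """
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))      # mean intra-class scatter
    w = np.linalg.solve(S, vs.mean(axis=0) - vn.mean(axis=0))
    ts, tn = vs @ w, vn @ w                      # scalar ratings per image
    return float((ts[:, None] > tn[None, :]).mean())
```

The channelization reduces each image to a handful of numbers, which is what makes the covariance matrix estimable from a realistic number of noise realizations.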

Subject(s)
Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Myocardial Perfusion Imaging/methods , Positron-Emission Tomography/methods , Radioisotopes , Rubidium , Ventricular Dysfunction, Left/diagnostic imaging , Algorithms , Humans , Quality Control , Radiopharmaceuticals , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Image Process ; 18(6): 1228-38, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19398410

ABSTRACT

We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian-distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration-mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained by following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm, both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
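The substitution described replaces the measured value by a conditional expectation of the underlying count variable given the electronic-noise-corrupted measurement. A numerical sketch of such a conditional expectation for the Poisson-plus-Gaussian case (illustrative only; the paper's derivation also covers ED models, and this helper name is made up):

```python
import math

def poisson_mean_given_noisy(y, lam, sigma, nmax=200):
    """E[N | y] for y = N + g, with N ~ Poisson(lam) and g ~ Normal(0, sigma^2).

    The posterior over the count n is proportional to
    Pois(n; lam) * Normal(y - n; 0, sigma); the sum is evaluated in
    log space for numerical stability.
    """
    logs = [
        -lam + n * math.log(lam) - math.lgamma(n + 1)
        - (y - n) ** 2 / (2.0 * sigma ** 2)
        for n in range(nmax)
    ]
    m = max(logs)
    weights = [math.exp(l - m) for l in logs]
    return sum(n * w for n, w in enumerate(weights)) / sum(weights)
```

As sigma shrinks the estimate tracks the measurement itself, and as sigma grows it falls back toward the Poisson prior mean, which matches the intuition behind the substitution rule.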


Subject(s)
Image Processing, Computer-Assisted/methods , Models, Statistical , Signal Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Computer Simulation , Phantoms, Imaging , Poisson Distribution
19.
IEEE Trans Nucl Sci ; 56(5): 2636-2643, 2009.
Article in English | MEDLINE | ID: mdl-21643442

ABSTRACT

Our goal is to study the trade-off between image degradation and the improved detection efficiency and resolution gained by allowing multiplexing in multi-pinhole (MPH) SPECT, and to determine the optimal pinhole number for MPH design. We used an analytical 3D MPH projector and two digitized phantoms, the mouse whole body (MOBY) phantom and a hot sphere phantom, to generate noise-free and noisy projections, simulating pinhole collimators fitted with pre-studied pinhole patterns. We simulated three schemes to achieve different degrees of multiplexing: (1) fixed magnification and detection efficiency; (2) fixed detection efficiency and varied magnification; (3) fixed magnification and varied detection efficiency. We generated various noisy data sets by simulating Poisson noise on differently scaled noise-free projections and obtained 20 noise realizations for each setting. All datasets were reconstructed using a 3D MPH ML-EM reconstruction method. We analyzed quantitative accuracy using the normalized mean-square error, evaluated image contrast for the hot sphere phantom simulation, and measured image noise as the average normalized standard deviation of selected pixels for different degrees of multiplexing. In general, no apparent artifacts were observed in the reconstructed images, illustrating the effectiveness of the reconstruction. Bias increased with the degree of multiplexing. Contrast was not significantly affected by multiplexing in scheme (1). Scheme (2) showed that excessive multiplexing to improve image resolution did not improve the overall trade-off of bias and noise compared with no multiplexing. However, scheme (3) showed that, compared with no multiplexing, the trade-off initially improved with increased multiplexing, since allowing a greater number of pinholes improves detection efficiency; the trade-off reached a maximum and then decreased with further multiplexing due to image degradation from increased bias. 
The optimal pinhole number in this scheme was 7 for a compact camera of 12 cm × 12 cm and 9 for a standard gamma camera of 40 cm × 40 cm. We conclude that the gains in detection efficiency and resolution from increased multiplexing are eventually offset by increased image degradation. All the aforementioned factors must be considered in optimal MPH collimator design for small-animal SPECT imaging.
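The noise model and figure of merit used above are simple to reproduce: scale the noise-free projection to a chosen count level, draw independent Poisson realizations, and score each reconstruction by the normalized mean-square error. A sketch with assumed function names:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmse(recon, truth):
    """Normalized mean-square error of a reconstruction against the phantom."""
    return float(np.sum((recon - truth) ** 2) / np.sum(truth ** 2))

def poisson_realizations(noise_free, scale, n=20):
    """Scale a noise-free projection to a target count level, then draw
    n independent Poisson noise realizations, returned in the original
    intensity units."""
    lam = noise_free * scale
    return rng.poisson(lam, size=(n,) + noise_free.shape) / scale
```

The scale factor sets the count level, so sweeping it reproduces the bias/noise trade-off study at different noise levels from a single noise-free projection.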

20.
IEEE Trans Nucl Sci ; 56(1): 91-96, 2009 Feb 01.
Article in English | MEDLINE | ID: mdl-20700481

ABSTRACT

The purpose of this study is to investigate optimal respiratory gating schemes, using different numbers of gates and placements within the respiratory cycle, for reducing respiratory motion (RM) artifacts in myocardial SPECT. The 4D NCAT phantom with its realistic respiratory model was used to generate 96 3D phantoms equally spaced over a complete respiratory cycle, modeling the activity distribution of a typical Tc-99m Sestamibi study with the maximum diaphragm movement set at 2 cm. The 96 time frames were grouped to simulate various gating schemes (1, 3, 6, and 8 equally spaced gates) and different placements of the gates within the respiratory cycle. Projection data, including the effects of attenuation, collimator-detector response, and scatter, were generated for each respiratory gate and each gating scheme and reconstructed using the OS-EM algorithm with attenuation correction. Attenuation correction was performed with average attenuation maps for each gate and over the entire respiratory cycle. Bull's-eye polar plots generated from the reconstructed images for each gate were analyzed and compared to assess the effect of RM. RM artifacts were reduced most when going from the ungated to the gated case. No significant difference was found in attenuation-compensated images between the use of gated and average attenuation maps. Our results indicate that the extent of RM artifacts depends on the placement of the gates within a gating scheme: artifacts are less prominent in gates near end-expiration and more prominent near end-inspiration. This dependence on gate placement decreases with higher numbers of gates (6 or more). However, it is possible to devise a non-uniform time-interval gating scheme with 3 gates that produces results similar to those from a higher number of gates. We conclude that respiratory gating is an effective way to reduce RM artifacts. Effective implementation of respiratory gating to further improve quantitative myocardial SPECT requires optimization of the gating scheme based on the amount of respiratory motion of the heart during each gate and the placement of the gates within the respiratory cycle.
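The frame-grouping step described above — 96 frames over one cycle combined into 1, 3, 6, or 8 gates at a chosen placement — is a partition-and-average over the cyclic frame series. A sketch (the offset parameterization of gate placement is an assumption for illustration):

```python
import numpy as np

def group_into_gates(frames, n_gates, offset=0):
    """Average a cyclic series of time frames into equal respiratory gates.

    frames: array of shape (n_frames, ...) covering one respiratory cycle,
    with n_frames divisible by n_gates.
    offset: frame index where gate 0 starts, i.e. the placement of the
    gates within the cycle (a hypothetical parameterization).
    """
    n = len(frames)
    rolled = np.roll(frames, -offset, axis=0)
    return rolled.reshape(n_gates, n // n_gates, *frames.shape[1:]).mean(axis=1)
```

Sweeping the offset while holding n_gates fixed reproduces the gate-placement comparison; a non-uniform scheme would instead group frames by explicit, unequal index ranges.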
