Results 1 - 20 of 21

1.
BMC Med Inform Decis Mak; 21(1): 114, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812383

ABSTRACT

BACKGROUND: Artificial intelligence (AI) research is highly dependent on the nature of the data available. With the steady increase of AI applications in the medical field, the demand for high-quality medical data is growing significantly. Here we describe the development of a platform for providing and sharing digital pathology data with AI researchers, and highlight the challenges to overcome in operating a sustainable platform in conjunction with pathologists. METHODS: Over 3000 pathology slides of histologically confirmed tumor cases from five organ systems (liver, colon, prostate, pancreas and biliary tract, and kidney) were selected for the dataset by the pathology departments of three hospitals. After digitizing the slides, pathologists annotated tumor areas and overlaid them onto the images as the ground truth for AI training. To reduce the pathologists' workload, AI-assisted annotation was established in collaboration with university AI teams. RESULTS: A web-based data-sharing platform was developed in 2019 to share this large pathological image dataset. The platform includes 3100 images and 5 pre-processing algorithms that allow AI researchers to easily load images into their learning models. DISCUSSION: Because privacy-protection regulations differ among countries, the most prudent approach when releasing an internationally shared learning platform is to obtain patient consent during data acquisition. CONCLUSIONS: Despite the limitations encountered during platform development and model training, the present medical image sharing platform can steadily fulfill AI developers' high demand for quality data. This study is expected to help other researchers intending to build similar platforms that are more effective and accessible in the future.


Subject(s)
Artificial Intelligence; Neoplasms; Algorithms; Humans; Male
2.
Neuroimage; 172: 874-885, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29162523

ABSTRACT

Neuromelanin (NM) is an endogenous iron-chelating molecule of pigmented neurons in the human substantia nigra (SN). Along with increased iron deposition, the reduction in NM-containing dopaminergic neurons and the variation of iron load on NM are generally considered important factors in the pathogenesis of Parkinson's disease (PD). The aim of this study was to non-invasively delineate the spatial distributions of paramagnetic magnetic susceptibility perturbers, such as the NM-iron complex and ferric iron, in the SN. Multiple quantitative MR parameters (T1, T2, T2*, susceptibility-weighted imaging (SWI), quantitative susceptibility mapping (QSM), and T1-weighted imaging with magnetization transfer (MT) effects) were acquired for six post-mortem SN samples from donors with no history of neurological disease. Co-registered quantitative histological validations were performed to identify and correlate NM pigments, iron deposits, and myelin distributions with the associated MR parameters. Regions with NM pigments and iron deposits showed positive (paramagnetic) magnetic susceptibility values on QSM, while myelinated areas showed negative (diamagnetic) values. The region of reduced T2 values in the SN mostly coincided with high iron deposits, but not necessarily with NM pigments. The correlations between T2*/T2 (or T2*/T2²) values and NM pigments were higher than those between T2* values and NM pigments, owing to the effective size difference between the NM-iron complex and ferric iron. Consequently, separate segmentations of ferric iron from the T2 map and of the NM-iron complex from the T2*/T2 map (or T2*/T2² map) were possible, with the boundary of the SN determined from the T1-weighted image.


Subject(s)
Iron/analysis; Magnetic Resonance Imaging/methods; Melanins/analysis; Substantia Nigra/chemistry; Substantia Nigra/diagnostic imaging; Adult; Aged; Aged, 80 and over; Autopsy; Female; Humans; Image Processing, Computer-Assisted; Male; Middle Aged
3.
Sensors (Basel); 18(8), 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30127306

ABSTRACT

Multimodal biometrics are promising for providing a strong security level for personal authentication, yet a practical multimodal biometric system must use signals that are easy to acquire but not easily compromised. We developed a wearable wrist band integrating multispectral skin photomatrix (MSP) and electrocardiogram (ECG) sensors to improve the collectability, performance, and circumvention resistance of multimodal biometric authentication. The band was designed to ensure collectability by sensing both MSP and ECG easily, and to achieve high authentication performance with low computation, efficient memory usage, and relatively fast response. Acquiring MSP and ECG with contact-based sensors also prevents remote access to personal data. Personal authentication with multimodal biometrics using the integrated wearable wrist band was evaluated in 150 subjects and achieved a 0.2% equal error rate (EER) and 100% detection probability at 1% false acceptance rate (FAR) (PD.1), which is comparable to other state-of-the-art multimodal biometrics. An additional investigation with a separate MSP sensor that enhanced contact with the skin, along with ECG, reached 0.1% EER and 100% PD.1, showing the great potential of our in-house wearable band for practical applications. These results demonstrate that our newly developed wearable wrist band may provide a reliable and easy-to-use multimodal biometric solution for personal authentication.


Subject(s)
Biometric Identification/instrumentation; Electrocardiography/instrumentation; Wearable Electronic Devices; Wrist; Humans
4.
IEEE Trans Nucl Sci; 60(5): 3373-3382, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-24966415

ABSTRACT

The aim of this study was to obtain voxel-wise PET accuracy and precision when using tissue segmentation for attenuation correction. We applied multiple thresholds to the CT images of 23 patients to classify tissues. For six of the 23 patients, MR images were also acquired, and the MR fat/in-phase ratio images were used for fat segmentation. Segmented tissue classes were used to create attenuation maps, which were used for attenuation correction in PET reconstruction. PET bias images were then computed using the PET reconstructed with the original CT as the reference. We registered the CTs of all patients and transformed the corresponding bias images accordingly, then obtained mean and standard-deviation bias atlases from all registered bias images. Our CT-based study shows that four-class segmentation (air, lungs, fat, other tissues), which is available on most PET-MR scanners, yields 15.1%, 4.1%, 6.6%, and 12.9% RMSE bias in lungs, fat, non-fat soft tissues, and bones, respectively. Accurate fat identification is achievable using fat/in-phase MR images. Furthermore, we found that three-class segmentation (air, lungs, other tissues) yields less than 5% standard deviation of bias within the heart, liver, and kidneys, implying that three-class segmentation can be sufficient to achieve small bias variation when imaging these three organs. Finally, we found that inter- and intra-patient lung density variations contribute almost equally to the overall standard deviation of bias within the lungs.
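
The four-class thresholding step can be sketched as follows. This is a minimal illustration only: the HU cut-offs (-950, -350, -30) and the 511 keV linear attenuation coefficients below are assumed round numbers, not values taken from the paper.

```python
import numpy as np

def segment_attenuation_map(ct_hu):
    """Map a CT image (Hounsfield units) to per-class 511 keV linear
    attenuation coefficients (1/cm) for PET attenuation correction,
    using four classes: air, lung, fat, other soft tissue."""
    mu = {"air": 0.0, "lung": 0.03, "fat": 0.086, "soft": 0.096}
    amap = np.full(ct_hu.shape, mu["soft"], dtype=float)  # default: soft tissue
    amap[ct_hu < -950] = mu["air"]
    amap[(ct_hu >= -950) & (ct_hu < -350)] = mu["lung"]
    amap[(ct_hu >= -350) & (ct_hu < -30)] = mu["fat"]
    return amap

# Toy 2x2 CT patch: air, lung, fat, soft tissue
ct = np.array([[-1000.0, -600.0], [-100.0, 40.0]])
print(segment_attenuation_map(ct))
```

The resulting map would then be forward-projected to attenuation correction factors in the PET reconstruction.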

5.
PLoS One; 18(11): e0293338, 2023.
Article in English | MEDLINE | ID: mdl-37917655

ABSTRACT

Modern deep neural networks often cannot be trained on a single GPU due to large model and data sizes. Model parallelism splits a model across multiple GPUs, but making it scalable and seamless is challenging because GPUs must share different kinds of information, with accompanying communication overhead. Specifically, we identify two key issues that make such parallelism inefficient or inaccurate: an efficient pipelining technique is crucial to maximize GPU utilization, and normalization layers in deep neural networks may affect performance because mini-batch statistics are shared differently. In this work, we address these issues by investigating efficient pipelining for model parallelism and effective normalizations in model/data parallelism when training with large mini-batches on multiple GPUs, so that model accuracy is not compromised. First, we propose a novel method to search for an optimal micro-batch size given the number of GPUs and memory size for model parallelism. For efficient pipelining, the mini-batch is usually divided into smaller batches (micro-batches), and training should be performed with the optimal micro-batch size to maximize the utilization of GPU computing resources. Our proposed micro-batch size search algorithm increased image throughput by up to 12% and improved the trainable mini-batch size by 25% compared with the conventional model parallelism method. Second, we investigate normalizations in distributed deep learning training for different parallelisms. Our experiments with different normalization methods suggest that the performance of batch normalization can be improved by sharing batch statistics among GPUs during data parallelism. We also confirmed that group normalization helped minimize accuracy degradation during model parallelism with pipelining and yielded consistent accuracies across diverse mini-batch sizes.
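
The micro-batch size search can be illustrated with a toy cost model. Everything quantitative here is an assumption: a fixed per-step launch overhead, a linear per-sample compute cost, a GPipe-style fill/drain penalty of (stages - 1) extra steps, and a simple activation-memory budget; the paper's actual search criterion is not given in the abstract.

```python
def search_micro_batch(mini_batch, n_stages, mem_budget, mem_per_sample,
                       t_sample=1.0, t_launch=0.5):
    """Return the micro-batch size maximizing modeled throughput subject
    to a per-GPU activation-memory budget."""
    best_m, best_throughput = None, 0.0
    for m in range(1, mini_batch + 1):
        if mini_batch % m:                       # micro-batch must divide the mini-batch
            continue
        if m * mem_per_sample > mem_budget:      # memory constraint
            continue
        n_micro = mini_batch // m
        step = t_launch + m * t_sample           # time of one pipeline step
        total = (n_micro + n_stages - 1) * step  # steady state plus fill/drain
        throughput = mini_batch / total
        if throughput > best_throughput:
            best_m, best_throughput = m, throughput
    return best_m

print(search_micro_batch(mini_batch=64, n_stages=4,
                         mem_budget=8.0, mem_per_sample=0.5))  # → 4 under these toy costs
```

Very small micro-batches pay the launch overhead many times; very large ones leave long pipeline bubbles, so the optimum sits in between.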


Subject(s)
Deep Learning; Algorithms; Neural Networks, Computer
6.
IEEE Trans Med Imaging; 42(10): 2961-2973, 2023 10.
Article in English | MEDLINE | ID: mdl-37104110

ABSTRACT

Accurate scatter estimation is important in quantitative SPECT for improving image contrast and accuracy. With a large number of photon histories, Monte-Carlo (MC) simulation can yield accurate scatter estimation but is computationally expensive. Recent deep learning-based approaches can yield accurate scatter estimates quickly, yet full MC simulation is still required to generate scatter estimates as ground-truth labels for all training data. Here we propose a physics-guided weakly supervised training framework for fast and accurate scatter estimation in quantitative SPECT, using a 100× shorter MC simulation as weak labels and enhancing them with deep neural networks. Our weakly supervised approach also allows quick fine-tuning of the trained network to any new test data for further improved performance, using an additional short MC simulation (weak label) for patient-specific scatter modeling. Our method was trained with 18 XCAT phantoms with diverse anatomies/activities and evaluated on 6 XCAT phantoms, 4 realistic virtual patient phantoms, 1 torso phantom, and 3 clinical scans from 2 patients for 177Lu SPECT with single/dual photopeaks (113, 208 keV). The proposed weakly supervised method yielded performance comparable to the supervised counterpart in phantom experiments, but with significantly reduced computation for labeling. With patient-specific fine-tuning, it achieved more accurate scatter estimates than the supervised method in clinical scans. Our physics-guided weak supervision thus enables accurate deep scatter estimation in quantitative SPECT with much lower labeling computation and a patient-specific fine-tuning capability at test time.


Subject(s)
Neural Networks, Computer; Tomography, Emission-Computed, Single-Photon; Humans; Tomography, Emission-Computed, Single-Photon/methods; Computer Simulation; Torso; Phantoms, Imaging; Monte Carlo Method; Scattering, Radiation; Image Processing, Computer-Assisted/methods
7.
Med Image Anal; 89: 102886, 2023 10.
Article in English | MEDLINE | ID: mdl-37494811

ABSTRACT

Microsatellite instability (MSI) refers to alterations in the length of simple repetitive genomic sequences. MSI status serves as a prognostic and predictive factor in colorectal cancer: MSI-high status is a good prognostic factor in stage II/III cancer, predicts a lack of benefit from adjuvant fluorouracil chemotherapy in stage II cancer, and predicts a good response to immunotherapy in stage IV cancer. Determining MSI status in patients with colorectal cancer is therefore important for selecting the appropriate treatment protocol. In the Pathology Artificial Intelligence Platform (PAIP) 2020 challenge, artificial intelligence researchers were invited to predict MSI status from colorectal cancer slide images. Participants performed two tasks. The primary task was to classify a given slide image as belonging to either the MSI-high or the microsatellite-stable group. The second task, tumor area segmentation, was used to break ties in the main task. Of the 495 participants enrolled in the challenge, 210 downloaded the images, and 23 teams submitted their final results. Seven teams from the top 10 participants agreed to disclose their algorithms, most of which were convolutional neural network-based deep learning models, such as EfficientNet and U-Net. The top-ranked system achieved the highest F1 score (0.9231). This paper summarizes the various methods used in the PAIP 2020 challenge and supports the effectiveness of digital pathology for identifying the relationship between colorectal cancer and MSI characteristics.


Subject(s)
Colorectal Neoplasms; Microsatellite Instability; Humans; Artificial Intelligence; Prognosis; Fluorouracil/therapeutic use; Colorectal Neoplasms/genetics; Colorectal Neoplasms/pathology
8.
Med Image Anal; 67: 101854, 2021 01.
Article in English | MEDLINE | ID: mdl-33091742

ABSTRACT

The Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning dataset with broad accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to use PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms on two tasks: Task 1, liver cancer segmentation, and Task 2, viable tumor burden estimation. Performance on the two tasks was strongly correlated: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms and discussed the pathological implications of the images that were easy to predict for cancer segmentation and those that were challenging for viable tumor burden estimation. Of the 231 participants who accessed the challenge datasets, 28 teams submitted a total of 64 results. The submitted algorithms segmented liver cancer in WSIs with a top score of 0.78. The PAIP challenge was created to address the lack of research on liver cancer in digital pathology. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnosis; however, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation methods.


Subject(s)
Artificial Intelligence; Liver Neoplasms; Algorithms; Humans; Image Processing, Computer-Assisted; Liver Neoplasms/diagnostic imaging; Tumor Burden
9.
IEEE Trans Med Imaging; 39(5): 1369-1379, 2020 05.
Article in English | MEDLINE | ID: mdl-31647425

ABSTRACT

Quantitative yttrium-90 (Y-90) SPECT imaging is challenging due to the nature of Y-90, an almost pure beta emitter whose continuous spectrum of bremsstrahlung photons has a relatively low yield. This paper proposes joint spectral reconstruction (JSR), a novel bremsstrahlung SPECT reconstruction method that uses multiple narrow acquisition windows with accurate multi-band forward modeling to cover a wide range of the energy spectrum. Theoretical analyses using Fisher information and Monte-Carlo (MC) simulation with a digital phantom show that the proposed JSR model with multiple acquisition windows has better covariance (precision) performance than previous methods using multi-band forward modeling with a single acquisition window, or single-band forward modeling with a single acquisition window. We also propose an energy-window subset (ES) algorithm for JSR to achieve fast empirical convergence, and maximum-likelihood-based initialization for all reconstruction methods to improve quantification accuracy in early iterations. For both MC simulation with a digital phantom and an experimental study with a physical multi-sphere phantom, our proposed JSR-ES, a fast algorithm for JSR with ES, yielded higher recovery coefficients (RCs) on hot spheres over all iterations and sphere sizes than all other evaluated methods, owing to fast empirical convergence. In the experimental study, for the smallest hot sphere (diameter 1.6 cm), at the 20th iteration the increase in RCs with JSR-ES was 66% and 31% compared with single wide-band and narrow-band forward models, respectively. JSR-ES also yielded lower residual count error (RCE) on a cold sphere over all iterations than the other methods for MC simulation with known scatter, but led to greater RCE than the single narrow-band forward model at higher iterations in the experimental study when using estimated scatter.


Subject(s)
Tomography, Emission-Computed, Single-Photon; Yttrium Radioisotopes; Algorithms; Image Processing, Computer-Assisted; Likelihood Functions; Monte Carlo Method; Phantoms, Imaging; Yttrium Radioisotopes/therapeutic use
10.
IEEE Trans Image Process; 26(4): 1637-1649, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28129157

ABSTRACT

A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. NLM filters achieve powerful denoising and excellent detail preservation by averaging many noisy pixels with appropriately chosen weights. The NLM weight between two different pixels is determined by the similarity of the two patches surrounding those pixels and by a smoothing parameter. Another important factor influencing denoising performance is the self-weight value assigned to the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods in determining the contribution of the center pixels in the NLM filter. However, the LJS method may produce excessively large self-weight estimates since no upper bound is assumed, and it uses a relatively large local area for estimating the self-weights, which may introduce a strong bias. In this paper, we investigate these issues in the LJS method and propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP), based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image, with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts compared with the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that yields PSNR values close to the optimal ones.
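
The role of the center-pixel self-weight can be seen in a minimal NLM implementation. The simple capping rule below is only a stand-in for the paper's LMM-DB/LMM-RP estimators, and all parameter values are illustrative.

```python
import numpy as np

def nlm_filter(img, f=1, t=3, h=0.15, self_weight=None):
    """Minimal non-local means. f: patch half-size, t: search half-size,
    h: smoothing parameter. If self_weight is given, the center-pixel
    weight is capped at that value (bounded self-weight)."""
    H, W = img.shape
    padded = np.pad(img, f, mode="reflect")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            p = padded[i:i + 2 * f + 1, j:j + 2 * f + 1]  # patch around (i, j)
            wsum = vsum = 0.0
            for di in range(-t, t + 1):
                for dj in range(-t, t + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < H and 0 <= jj < W):
                        continue
                    q = padded[ii:ii + 2 * f + 1, jj:jj + 2 * f + 1]
                    w = np.exp(-np.mean((p - q) ** 2) / h ** 2)
                    if di == 0 and dj == 0 and self_weight is not None:
                        w = min(w, self_weight)  # bound the self-weight
                    wsum += w
                    vsum += w * img[ii, jj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((16, 16))  # constant image + noise
denoised = nlm_filter(noisy, self_weight=0.5)
```

Without a bound, a self-weight near 1 keeps much of the noise in the output; capping it lets the non-local average dominate.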

11.
Med Phys; 44(12): 6364-6376, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28940483

ABSTRACT

PURPOSE: In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships and for developing future treatment planning strategies. Accurately assessing microsphere distributions is also important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy-window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. METHODS: Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by an MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal-liver activity concentrations using a (liver) relative calibration. RESULTS: The scatter estimate converged after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, MC scatter modeling substantially improved activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%), with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling, contrast improved substantially both visually and in terms of a detectability index, which was especially relevant for the low-uptake extrahepatic objects. The trends observed for the phantom were also seen in the patient studies, where lesion activity concentrations and lesion-to-liver concentration ratios were lower for SPECT without scatter correction than for reconstruction with just two MC scatter updates: in eleven lesions, the mean uptake was 4.9 vs. 7.1 MBq/mL (P = 0.0547), the mean normal-liver uptake was 1.6 vs. 1.5 MBq/mL (P = 0.056), and the mean lesion-to-liver uptake ratio was 2.7 vs. 4.3 (P = 0.0402) for reconstruction without and with scatter correction, respectively. CONCLUSIONS: The quantitative accuracy of 90Y bremsstrahlung imaging can be substantially improved with MC scatter modeling without significant degradation in image noise or intensive computational requirements.
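
The reconstruction model can be sketched as OS-EM with the MC scatter estimate entering the forward model as a fixed additive term, y ≈ A x + s. The toy dense matrix below stands in for the analytical projector; sizes, counts, and the scatter level are invented for illustration.

```python
import numpy as np

def osem_scatter(y, A, scatter, n_iter=20, n_subsets=4):
    """OS-EM with an additive, fixed scatter estimate in the forward model."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            Asub = A[idx]
            expected = Asub @ x + scatter[idx]           # forward model with scatter
            ratio = y[idx] / np.maximum(expected, 1e-12)
            sens = Asub.T @ np.ones(len(idx))            # subset sensitivity
            x *= (Asub.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(16, 8))   # toy system matrix (projector)
x_true = rng.uniform(1.0, 5.0, size=8)
s = np.full(16, 0.5)                      # fixed MC-style scatter estimate
y = A @ x_true + s                        # noise-free projections
x_hat = osem_scatter(y, A, s)
```

Because the scatter term sits only in the expected projections, it is never amplified by the multiplicative update, which keeps the estimate stable.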


Subject(s)
Image Processing, Computer-Assisted/methods; Models, Theoretical; Monte Carlo Method; Scattering, Radiation; Single Photon Emission Computed Tomography Computed Tomography; Yttrium Radioisotopes; Humans; Phantoms, Imaging; Photons; Time Factors; Torso/diagnostic imaging
12.
Nucl Med Mol Imaging; 50(1): 13-23, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26941855

ABSTRACT

PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images, such as CT and MRI from hybrid scanners, are also important ingredients for further improving PET or SPECT image quality. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work that uses anatomical information in molecular image reconstruction algorithms for better image quality, describing the mathematical models, discussing sources of anatomical information for different cases, and showing examples.

13.
Sci Rep; 6: 32647, 2016 09 06.
Article in English | MEDLINE | ID: mdl-27596274

ABSTRACT

High-field magnetic resonance imaging (MRI)-based delineation of the substantia nigra (SN) and visualization of its inner cellular organization are promising methods for evaluating morphological changes associated with neurodegenerative diseases; however, the corresponding MR contrasts must be matched and validated against quantitative histological information. Slices from two postmortem SN samples were imaged on a 7 Tesla (7T) MRI with T1 and T2* imaging protocols and then stained with Perls' Prussian blue, Kluver-Barrera, tyrosine hydroxylase, and calbindin immunohistochemistry in a serial manner. The association between T2* values and quantitative histology was investigated with a co-registration method that accounts for histology slice preparation. The ventral T2* hypointense layers between the SN pars reticulata (SNr) and the crus cerebri extended anteriorly to the posterior part of the crus cerebri, which demonstrates the difficulty of MRI-based delineation of the SN. We found that the paramagnetic hypointense areas within the dorsolateral SN corresponded to clusters of neuromelanin (NM). These NM-rich zones were distinct from the hypointense ventromedial regions with high iron pigments. Nigral T2* imaging at 7T can thus reflect the density of NM-containing neurons, as metal-bound NM macromolecules may decrease T2* values and cause hypointense signals in T2* imaging at 7T.


Subject(s)
Contrast Media/chemistry; Magnetic Resonance Imaging; Melanins/metabolism; Postmortem Changes; Substantia Nigra/metabolism; Substantia Nigra/pathology; Adult; Female; Humans; Iron/metabolism; Male; Middle Aged
14.
IEEE Trans Med Imaging; 33(10): 1960-8, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25291351

ABSTRACT

The ordered subset expectation maximization (OSEM) algorithm approximates the gradient of a likelihood function using a subset of projections instead of all projections, enabling fast image reconstruction for emission and transmission tomography such as SPECT, PET, and CT. However, OSEM does not significantly accelerate reconstruction with computationally expensive regularizers, such as patch-based nonlocal (NL) regularizers, because the regularizer gradient is evaluated for every subset. We propose to use variable splitting to separate the likelihood term and the regularizer term in the penalized emission tomographic image reconstruction problem, and to optimize it using the alternating direction method of multipliers (ADMM). We also propose a fast algorithm to optimize the ADMM parameter based on convergence rate analysis. This new scheme enables more sub-iterations on the likelihood term. We evaluated our ADMM for 3-D SPECT image reconstruction with a patch-based NL regularizer that uses the Fair potential function. Our proposed ADMM improved the speed of convergence substantially compared with existing methods such as gradient descent, EM, OSEM using De Pierro's approach, and the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.
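
The variable-splitting idea can be illustrated on a toy problem: a quadratic data term stands in for the Poisson likelihood and an l1 prior stands in for the patch-based NL regularizer, with ADMM alternating between the two sides. This is a sketch of the splitting pattern only, not the paper's algorithm, and all sizes and parameters are invented.

```python
import numpy as np

def admm_lasso(A, y, lam, rho=1.0, n_iter=300):
    """Variable splitting x = z separates the data term ||Ax - y||^2 / 2
    from the regularizer lam * ||z||_1; ADMM alternates a data-side
    x-update, a regularizer-side z-update (soft-thresholding), and a
    scaled dual update."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for the x-update
    Aty = A.T @ y
    for _ in range(n_iter):
        x = Q @ (Aty + rho * (z - u))             # likelihood-side update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u += x - z                                # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]   # sparse ground truth
y = A @ x_true                                        # noise-free data
x_hat = admm_lasso(A, y, lam=0.01)
```

The benefit mirrors the paper's point: the cheap data-side update can be iterated freely, while the expensive regularizer is touched only through its proximal step.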


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Tomography, Emission-Computed/methods; Computer Simulation; Phantoms, Imaging
15.
Med Phys; 41(5): 051901, 2014 May.
Article in English | MEDLINE | ID: mdl-24784380

ABSTRACT

PURPOSE: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. METHODS: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. RESULTS: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2-1.3 times greater in the medium body than in the small body phantom and 1.3-1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6-1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3-2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. 
Results for external calibration exhibited much larger RMS errors than size-matched internal calibration. Using an average-body-size external-to-internal calibration correction factor reduced the errors to values closer to those of internal calibration. RMS errors of less than 30%, or about 0.01 for the bone and 0.1 for the red marrow volume fractions, would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times the dose for all body phantom sizes, (b) internal calibration with 2 times the dose for the small and medium body phantoms, and (c) corrected external calibration with 4 times the dose for all body phantom sizes. CONCLUSIONS: These phantom studies are promising and demonstrate the potential of dual-energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.
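
The three-equation/three-unknown solve at the heart of DEQCT can be sketched as a small linear system per voxel or ROI: two calibrated HU measurements (low and high kVp) plus the constraint that the volume fractions sum to one. The calibration HU values below are invented for illustration, not the phantom's measured values.

```python
import numpy as np

def volume_fractions(hu_low, hu_high, cal):
    """Solve for (bone, red marrow, fat) volume fractions from:
       HU_low  = f_b*b_low  + f_r*r_low  + f_f*f_low
       HU_high = f_b*b_high + f_r*r_high + f_f*f_high
       1       = f_b + f_r + f_f
    cal maps each material to its (low kVp, high kVp) calibration HU."""
    M = np.array([[cal["bone"][0], cal["marrow"][0], cal["fat"][0]],
                  [cal["bone"][1], cal["marrow"][1], cal["fat"][1]],
                  [1.0, 1.0, 1.0]])
    return np.linalg.solve(M, np.array([hu_low, hu_high, 1.0]))

# Invented calibration values (HU at 80 kVp, HU at 140 kVp)
cal = {"bone": (600.0, 400.0), "marrow": (60.0, 55.0), "fat": (-80.0, -60.0)}
# A test voxel that is 10% bone, 50% red marrow, 40% fat:
hu_low = 0.1 * 600 + 0.5 * 60 + 0.4 * (-80)    # 58.0
hu_high = 0.1 * 400 + 0.5 * 55 + 0.4 * (-60)   # 43.5
print(volume_fractions(hu_low, hu_high, cal))  # ≈ [0.1, 0.5, 0.4]
```

The dual-energy measurement is what makes the system invertible: at a single kVp, bone and marrow/fat mixtures can produce the same HU.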


Subject(s)
Bone Marrow/radiation effects; Bone and Bones/radiation effects; Radiometry/methods; Radiotherapy/methods; Tomography, X-Ray Computed/methods; Adipose Tissue/radiation effects; Algorithms; Body Size/radiation effects; Calibration; Humans; Lumbar Vertebrae; Models, Biological; Radiation Dosage; Spinal Cord Dorsal Horn/radiation effects
16.
IEEE Trans Med Imaging; 32(2): 141-52, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22759442

ABSTRACT

Motion-compensated image reconstruction (MCIR) methods incorporate motion models to improve image quality in the presence of motion. MCIR methods differ in how they use motion information, and they have been well studied separately; however, there have been fewer theoretical comparisons of different MCIR methods. This paper compares the theoretical noise properties of three popular MCIR methods assuming known nonrigid motion. We show the relationship among three MCIR methods, motion-compensated temporal regularization (MTR), the parametric motion model (PMM), and post-reconstruction motion correction (PMC), for penalized weighted least-squares cases. These analyses show that PMM and MTR are matrix-weighted sums of all registered image frames, while PMC is a scalar-weighted sum. We further investigate the noise properties of MCIR methods with Poisson models and quadratic regularizers by deriving accurate and fast variance prediction formulas using an "analytical approach." These theoretical noise analyses show that the variances of PMM and MTR are lower than or comparable to the variance of PMC due to the statistical weighting. They also facilitate comparisons of the noise properties of different MCIR methods, including the effects of different quadratic regularizers, the influence of the motion through its Jacobian determinant, and the effect of assuming that total activity is preserved. Two-dimensional positron emission tomography simulations confirm the theoretical results.


Subject(s)
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Positron-Emission Tomography/methods, Subtraction Technique, Motion, Reproducibility of Results, Sensitivity and Specificity, Signal-to-Noise Ratio
17.
IEEE Trans Med Imaging ; 32(2): 295-305, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23086521

ABSTRACT

Compensating for the collimator-detector response (CDR) in SPECT is important for accurate quantification. The CDR consists of both a geometric response and a septal penetration and collimator scatter response. The geometric response can be modeled analytically and is often used for modeling the whole CDR if the geometric response dominates. However, for radionuclides that emit medium- or high-energy photons, such as I-131, the septal penetration and collimator scatter response is significant and its modeling in the CDR correction is important for accurate quantification. There are two main methods for modeling the depth-dependent CDR so as to include both the geometric response and the septal penetration and collimator scatter response. One is to fit a Gaussian plus exponential function that is rotationally invariant to the measured point source response at several source-detector distances. However, a rotationally invariant exponential function cannot represent the star-shaped septal penetration tails in detail. The other is to perform Monte Carlo (MC) simulations to generate the depth-dependent point spread functions (PSFs) for all necessary distances. However, MC simulations, which require careful modeling of the SPECT detector components, can be challenging, and accurate results may not be available for all of the different SPECT scanners in clinics. In this paper, we propose an alternative approach to CDR modeling. We use a Gaussian function plus a 2-D B-spline PSF template and fit the model to measurements of an I-131 point source at several distances. The proposed PSF-template-based approach is nearly non-parametric, captures the characteristics of the septal penetration tails, and minimizes the difference between the fitted and measured CDR at the distances of interest. The new model is applied to I-131 SPECT reconstructions of experimental phantom measurements, a patient study, and a MC patient simulation study employing the XCAT phantom.
The proposed model yields recovery coefficients up to 16.5% and 10.8% higher than those obtained with the conventional Gaussian model and the Gaussian plus exponential model, respectively.
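The conventional rotationally invariant model described above can be sketched as a least-squares fit of a Gaussian-plus-exponential function to a radial point-source profile. This is an illustrative sketch on synthetic data (the paper fits measured I-131 profiles and replaces the exponential term with a 2-D B-spline template):

```python
import numpy as np
from scipy.optimize import curve_fit

# Rotationally invariant Gaussian + exponential CDR model; r is the radial
# distance from the point-source center (arbitrary units).
def gauss_plus_exp(r, a, sigma, b, tau):
    return a * np.exp(-r**2 / (2.0 * sigma**2)) + b * np.exp(-r / tau)

# Synthetic "measured" profile standing in for a real point-source
# measurement at one source-detector distance.
r = np.linspace(0.0, 20.0, 200)
true_profile = gauss_plus_exp(r, 1.0, 2.0, 0.05, 6.0)
rng = np.random.default_rng(1)
measured = true_profile + rng.normal(0.0, 1e-3, r.size)

# Fit the four parameters (amplitude, width, tail amplitude, tail scale).
popt, _ = curve_fit(gauss_plus_exp, r, measured, p0=[0.5, 1.0, 0.1, 5.0])
```

Because the fitted model depends only on r, it cannot reproduce star-shaped septal-penetration tails, which is what motivates the template-based approach in the abstract.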


Subject(s)
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Single-Photon Emission Computed Tomography/instrumentation, Transducers, Equipment Design, Equipment Failure Analysis, Computer-Assisted Image Interpretation/instrumentation, Reproducibility of Results, Sensitivity and Specificity
18.
Phys Med Biol ; 58(17): 6225-40, 2013 Sep 07.
Article in English | MEDLINE | ID: mdl-23956327

ABSTRACT

Quantitative SPECT techniques are important for many applications including internal emitter therapy dosimetry, where accurate estimation of total target activity and of the activity distribution within targets are both potentially important for dose-response evaluations. We investigated non-local means (NLM) post-reconstruction filtering for accurate I-131 SPECT estimation of both total target activity and the 3D activity distribution. We first investigated activity estimation versus number of ordered-subsets expectation-maximization (OSEM) iterations. We performed simulations using the XCAT phantom with tumors containing a uniform and a non-uniform activity distribution, and measured the recovery coefficient (RC) and the root mean squared error (RMSE) to quantify total target activity and activity distribution, respectively. We observed that using more OSEM iterations is essential for accurate estimation of RC, but may or may not improve RMSE. We then investigated various post-reconstruction filtering methods to suppress noise at high iteration counts while preserving image details, so that both RC and RMSE can be improved. Recently, NLM filtering methods have shown promising results for noise reduction. Moreover, NLM methods using high-quality side information can improve image quality further. We investigated several NLM methods with and without CT side information for I-131 SPECT imaging and compared them to conventional Gaussian filtering and to unfiltered methods. We studied four different ways of incorporating CT information in the NLM methods: two known (NLM CT-B and NLM CT-M) and two newly considered (NLM CT-S and NLM CT-H). We also evaluated the robustness of NLM filtering using CT information to erroneous CT. NLM CT-S and NLM CT-H yielded comparable RC values to unfiltered images while substantially reducing RMSE.
NLM CT-S changed RC by −2.7% to 2.6% relative to no filtering, and NLM CT-H decreased RC by up to 6%, while the other methods yielded lower RCs: Gaussian filtering (up to 11.8% decrease in RC), NLM without CT (up to 9.5% decrease), and NLM CT-M and NLM CT-B (up to 19.4% decrease). NLM CT-S and NLM CT-H reduced RMSE on tumors by 8.2% to 33.9% and by −0.9% to 36%, respectively, compared to no filtering, while the other methods yielded smaller reductions or even increases in RMSE: Gaussian filtering (up to 7.9% increase in RMSE), NLM without CT (up to 18.3% increase), and NLM CT-M and NLM CT-B (up to 31.5% increase). NLM CT-S and NLM CT-H also yielded images with tumor shapes that better matched the true shapes than the other methods did. All NLM methods using CT information were robust to small misregistration between SPECT and CT, but NLM CT-S and NLM CT-H were more sensitive than NLM CT-M and NLM CT-B to missing CT information.
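For readers unfamiliar with the filter family compared above, a minimal 1-D non-local means sketch (without the CT side information that the NLM CT variants add through the weights) looks like this:

```python
import numpy as np

def nlm_1d(signal, patch=3, search=10, h=0.3):
    """Minimal 1-D non-local means: each sample becomes a weighted average
    of nearby samples whose surrounding patches look similar. Illustrative
    only; the paper's CT-guided variants modify these weights."""
    n = signal.size
    pad = np.pad(signal, patch, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]                 # patch around sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights, values = [], []
        for j in range(lo, hi):
            p_j = pad[j:j + 2 * patch + 1]
            d2 = np.mean((p_i - p_j) ** 2)             # patch dissimilarity
            weights.append(np.exp(-d2 / h**2))
            values.append(signal[j])
        w = np.asarray(weights)
        out[i] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 50)      # piecewise-constant "tumor" profile
noisy = clean + rng.normal(0.0, 0.15, clean.size)
denoised = nlm_1d(noisy)
```

Because cross-edge patches get near-zero weight, the filter suppresses noise in flat regions while largely preserving the sharp boundary, which is the behavior the RC/RMSE comparisons above probe.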


Subject(s)
Computer-Assisted Image Processing/methods, Single-Photon Emission Computed Tomography/methods, X-Ray Computed Tomography/methods, Humans, Non-Hodgkin Lymphoma/diagnostic imaging, Non-Hodgkin Lymphoma/radiotherapy, Imaging Phantoms
19.
IEEE Trans Med Imaging ; 31(7): 1413-25, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22481813

ABSTRACT

Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
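A toy 1-D penalized weighted least-squares example (identity system matrix; not the paper's MCIR setting) illustrates both the problem and the flavor of the fix: heteroscedastic data weights make the local impulse response nonuniform under a plain quadratic penalty, while scaling the penalty by the local data certainty roughly restores uniformity. All names and the certainty-based scaling here are a simplified sketch, not the paper's actual design.

```python
import numpy as np

n = 60
A = np.eye(n)                                    # identity system isolates weighting effects
w = np.concatenate([np.full(n // 2, 1.0), np.full(n // 2, 9.0)])  # heteroscedastic weights
W = np.diag(w)

# First-difference quadratic roughness penalty R = D'D.
D = np.diff(np.eye(n), axis=0)
R = D.T @ D
beta = 5.0

def lir(H, j):
    """Local impulse response at pixel j of the linear estimator H."""
    e = np.zeros(n); e[j] = 1.0
    return H @ e

F = A.T @ W @ A
H_std = np.linalg.solve(F + beta * R, F)         # conventional quadratic regularizer

# Certainty-based design: scale the penalty locally by the data weighting so
# that the effective smoothing beta/w_j becomes roughly constant.
k = np.sqrt(w)
R_mod = (k[:, None] * R) * k[None, :]
H_mod = np.linalg.solve(F + beta * R_mod, F)

peak_lo = lir(H_std, 15).max()    # low-weight region: heavily smoothed, low peak
peak_hi = lir(H_std, 45).max()    # high-weight region: sharper, higher peak
peak_lo_m = lir(H_mod, 15).max()  # modified design: peaks nearly equal
peak_hi_m = lir(H_mod, 45).max()
```

The peak heights of the local impulse responses differ markedly between the two regions under the plain penalty and become nearly equal under the certainty-scaled one, which is the kind of uniformity the paper's regularization designs target for the MCIR case.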


Subject(s)
Computer-Assisted Image Processing/methods, Positron-Emission Tomography/methods, Artifacts, Computer Simulation, Humans, Computer-Assisted Image Processing/instrumentation, Biological Models, Motion, Imaging Phantoms, Positron-Emission Tomography/instrumentation, X-Ray Computed Tomography/instrumentation, X-Ray Computed Tomography/methods
20.
J Nucl Med ; 53(8): 1284-91, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22743250

ABSTRACT

UNLABELLED: Respiratory and cardiac motion is the most serious limitation to whole-body PET, resulting in spatial resolution close to 1 cm. Furthermore, motion-induced inconsistencies in the attenuation measurements often lead to significant artifacts in the reconstructed images. Gating can remove motion artifacts at the cost of increased noise. This paper presents an approach to respiratory motion correction using simultaneous PET/MRI to demonstrate initial results in phantoms, rabbits, and nonhuman primates and discusses the prospects for clinical application. METHODS: Studies with a deformable phantom, a free-breathing primate, and rabbits implanted with radioactive beads were performed with simultaneous PET/MRI. Motion fields were estimated from concurrently acquired tagged MR images using 2 B-spline nonrigid image registration methods and incorporated into a PET list-mode ordered-subsets expectation maximization algorithm. Using the measured motion fields to transform both the emission data and the attenuation data, we could use all the coincidence data to reconstruct any phase of the respiratory cycle. We compared the resulting signal-to-noise ratio (SNR) and the channelized Hotelling observer (CHO) detection SNR in the motion-corrected reconstruction with the results obtained from standard gating and from uncorrected studies. RESULTS: Motion correction virtually eliminated motion blur without reducing SNR, yielding images with SNR comparable to those obtained by gating with 5-8 times longer acquisitions in all studies. The CHO study in dynamic phantoms demonstrated a significant improvement (166%-276%) in lesion detection SNR with MRI-based motion correction as compared with gating (P < 0.001). This improvement was 43%-92% for large motion compared with lesion detection without motion correction (P < 0.001). CHO SNR in the rabbit studies confirmed these results.
CONCLUSION: Tagged MRI motion correction in simultaneous PET/MRI significantly improves lesion detection compared with respiratory gating and no motion correction while reducing radiation dose. In vivo primate and rabbit studies confirmed the improvement in PET image quality and provide the rationale for evaluation in simultaneous whole-body PET/MRI clinical studies.
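The reconstruction idea in this abstract, using all counts by warping each gate's data to a reference phase, can be caricatured in 1-D with known integer shifts standing in for the tagged-MRI motion fields (illustrative only; the paper operates on list-mode data with nonrigid B-spline motion and also transforms the attenuation data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D illustration of gating vs. motion-corrected combination: a
# "lesion" profile shifts by a known integer amount in each respiratory
# gate; counts are Poisson distributed.
n, gates = 80, 4
shifts = [0, 3, 6, 9]                          # assumed per-gate displacements
truth = np.full(n, 5.0)                        # background rate
truth[30:35] = 55.0                            # lesion

frames = [rng.poisson(np.roll(truth, s)) for s in shifts]

uncorrected = np.mean(frames, axis=0)          # all counts, motion-blurred
single_gate = frames[0].astype(float)          # gated: sharp but noisy
# Motion-corrected: warp each gate back to the reference phase, then combine,
# so all counts contribute without blur.
corrected = np.mean([np.roll(f, -s) for f, s in zip(frames, shifts)], axis=0)
```

The corrected profile keeps the lesion at its reference position while using four times the counts of a single gate, mirroring the abstract's report of gating-like sharpness at much lower noise.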


Subject(s)
Artifacts, Magnetic Resonance Imaging/methods, Movement, Positron-Emission Tomography/methods, Animals, Macaca mulatta, Male, Imaging Phantoms, Rabbits, Respiration, Time Factors