1.
Eur J Nucl Med Mol Imaging; 49(11): 3740-3749, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35507059

ABSTRACT

PURPOSE: To improve the quantitative accuracy and diagnostic confidence of PET images reconstructed without time-of-flight (ToF), using deep learning models trained for ToF image enhancement (DL-ToF). METHODS: A total of 273 [18F]-FDG PET scans were used, including data from six centres equipped with GE Discovery MI ToF scanners. PET data were reconstructed using the block-sequential-regularised-expectation-maximisation (BSREM) algorithm with and without ToF. The images were then split into training (n = 208), validation (n = 15), and testing (n = 50) sets. Three DL-ToF models were trained to transform non-ToF BSREM images to their target ToF images with different levels of DL-ToF strength (low, medium, high). The models were objectively evaluated using the testing set, based on the standardised uptake value (SUV) in 139 identified lesions and in normal regions of the liver and lungs. Three radiologists subjectively rated the models using the testing set, based on lesion detectability, diagnostic confidence, and image noise/quality. RESULTS: The non-ToF, DL-ToF low, medium, and high methods resulted in -28 ± 18, -28 ± 19, -8 ± 22, and 1.7 ± 24% differences (mean ± SD) in SUVmax for the lesions in the testing set, compared to the ToF-BSREM images. In background lung VOIs, the SUVmean differences were 7 ± 15, 0.6 ± 12, 1 ± 13, and 1 ± 11%, respectively. In normal liver, the SUVmean differences were 4 ± 5, 0.7 ± 4, 0.8 ± 4, and 0.1 ± 4%. Visual inspection showed that DL-ToF improved feature sharpness and convergence towards the ToF reconstruction. Blinded clinical readings of the testing set for diagnostic confidence (scale 0-5) showed that non-ToF, DL-ToF low, medium, and high, and ToF images scored 3.0, 3.0, 4.1, 3.8, and 3.5, respectively. For this set of images, DL-ToF medium therefore scored highest for diagnostic confidence. CONCLUSION: Deep learning-based image enhancement models may provide converged ToF-equivalent image quality without ToF reconstruction. In clinical scoring, DL-ToF-enhanced non-ToF images (medium and high) on average scored as high as, or higher than, ToF images. The model is generalisable and could hence be applied to non-ToF images from BGO-based PET/CT scanners.
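To make the objective evaluation concrete, here is a minimal Python sketch of the per-lesion SUVmax comparison described above; the function name and the three-lesion values are illustrative, not data from the study.

```python
import numpy as np

def suvmax_percent_diff(candidate: np.ndarray, reference_tof: np.ndarray):
    """Mean and SD of the per-lesion SUVmax difference, in percent,
    relative to the ToF-BSREM reference reconstruction."""
    rel = 100.0 * (candidate - reference_tof) / reference_tof
    return rel.mean(), rel.std()

# Illustrative use with made-up SUVmax values for three lesions:
ref_tof = np.array([8.1, 5.4, 12.0])   # ToF-BSREM
non_tof = np.array([6.0, 3.9, 8.5])    # non-ToF BSREM
print(suvmax_percent_diff(non_tof, ref_tof))
```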


Subjects
Deep Learning, Positron Emission Tomography Computed Tomography, Algorithms, Fluorodeoxyglucose F18, Humans, Image Processing, Computer-Assisted/methods, Positron Emission Tomography Computed Tomography/methods, Positron-Emission Tomography/methods, Tomography, X-Ray Computed
2.
Eur J Nucl Med Mol Imaging; 49(2): 539-549, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34318350

ABSTRACT

PURPOSE: To enhance the image quality of oncology [18F]-FDG PET scans acquired in shorter times and reconstructed by faster algorithms using deep neural networks. METHODS: List-mode data from 277 [18F]-FDG PET/CT scans, from six centres using GE Discovery PET/CT scanners, were split into ¾-, ½- and ¼-duration scans. Full-duration datasets were reconstructed using the convergent block sequential regularised expectation maximisation (BSREM) algorithm. Short-duration datasets were reconstructed with the faster OSEM algorithm. The 277 examinations were divided into training (n = 237), validation (n = 15) and testing (n = 25) sets. Three deep learning enhancement (DLE) models were trained to map full and partial-duration OSEM images into their target full-duration BSREM images. In addition to standardised uptake value (SUV) evaluations in lesions, liver and lungs, two experienced radiologists scored the quality of testing set images and BSREM in a blinded clinical reading (175 series). RESULTS: OSEM reconstructions demonstrated up to 22% difference in lesion SUVmax, for different scan durations, compared to full-duration BSREM. Application of the DLE models reduced this difference significantly for full-, ¾- and ½-duration scans, while simultaneously reducing the noise in the liver. The clinical reading showed that the standard DLE model with full- or ¾-duration scans provided an image quality substantially comparable to full-duration scans with BSREM reconstruction, yet in a shorter reconstruction time. CONCLUSION: Deep learning-based image enhancement models may allow a reduction in scan time (or injected activity) by up to 50%, and can decrease reconstruction time to a third, while maintaining image quality.
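A minimal sketch of how such training pairs could be assembled: each shortened-duration OSEM reconstruction is paired with the full-duration BSREM reconstruction of the same examination. The file-naming scheme and helper below are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrainingPair:
    input_image: str   # OSEM reconstruction at a fraction of the scan time
    target_image: str  # full-duration BSREM reconstruction, same examination

def build_pairs(exam_ids: List[str],
                fractions: Tuple[float, ...] = (1.0, 0.75, 0.5, 0.25)) -> List[TrainingPair]:
    """Pair every fractional-duration OSEM image with its full-duration
    BSREM target (hypothetical file names)."""
    return [TrainingPair(f"{eid}_osem_{int(f * 100)}pct.nii",
                         f"{eid}_bsrem_100pct.nii")
            for eid in exam_ids
            for f in fractions]

pairs = build_pairs(["exam001", "exam002"])
```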


Subjects
Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Positron Emission Tomography Computed Tomography/methods, Positron-Emission Tomography/methods, Tomography, X-Ray Computed
3.
J Clin Densitom; 22(3): 374-381, 2019.
Article in English | MEDLINE | ID: mdl-30497869

ABSTRACT

INTRODUCTION: Bone mineral density (BMD) analysis by dual-energy X-ray absorptiometry (DXA) can yield false negatives due to overlapping structures in the projections. Spectral detector CT (SDCT) can overcome these limitations by providing volumetric information. We investigated its performance for BMD assessment and compared it to DXA and phantomless volumetric bone mineral density (PLvBMD), the latter known to systematically underestimate BMD. DXA is the current standard for BMD assessment, while PLvBMD is an established alternative for opportunistic BMD analysis using CT. As with PLvBMD, spectral data could allow opportunistic BMD screening without additional phantom calibration. METHODOLOGY: Ten concentrations of dipotassium phosphate (K2HPO4), ranging from 0 to 600 mg/ml, in an acrylic phantom were scanned using SDCT under four different, clinically relevant scan conditions. Images were processed to estimate the K2HPO4 concentrations. A model representing a human lumbar spine (European Spine Phantom) was scanned and used for calibration via linear regression analysis. After calibration, our method was retrospectively applied for BMD assessment to abdominal SDCT scans of 20 patients who had also undergone PLvBMD and DXA. The performance of PLvBMD, DXA and our SDCT method was compared in terms of sensitivity, specificity, negative predictive value and positive predictive value for decreased BMD. RESULTS: There was excellent correlation (R² > 0.99, p < 0.01) between true and measured K2HPO4 concentrations for all scan conditions. Overall mean measurement error ranged from -11.5 ± 4.7 mg/ml (-2.8 ± 6.0%) to -12.3 ± 6.3 mg/ml (-4.8 ± 3.0%) depending on scan conditions. Using DXA as the reference standard, sensitivity/specificity for detecting decreased BMD in the scanned patients were 100%/73% using SDCT, 100%/40% using PLvBMD-provided T-scores, and 90-100%/40-53% using PLvBMD hydroxyapatite density classifications, respectively. CONCLUSIONS: Our results show excellent sensitivity and high specificity of SDCT for detecting decreased BMD, demonstrating clinical feasibility. Further validation in prospective clinical trials will be required.
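The calibration and screening-evaluation steps lend themselves to a short sketch. Below is a minimal Python illustration (all numbers are placeholders, not the study's data): a linear regression maps SDCT measurements to known K2HPO4 concentrations, and sensitivity/specificity are then computed against DXA as the reference standard.

```python
import numpy as np

# Calibration step: linear regression between SDCT-measured values and the
# known K2HPO4 concentrations of the phantom inserts (illustrative numbers).
true_conc = np.array([0, 50, 100, 200, 300, 400, 500, 600], dtype=float)  # mg/ml
measured = np.array([2.0, 47.5, 96.0, 194.0, 290.0, 388.0, 486.0, 584.0])

slope, intercept = np.polyfit(measured, true_conc, 1)
calibrated = slope * measured + intercept  # applied likewise to patient data

def sens_spec(pred_low: np.ndarray, ref_low: np.ndarray):
    """Sensitivity/specificity for 'decreased BMD', with DXA as reference.
    Both arguments are boolean arrays, one entry per patient."""
    tp = np.sum(pred_low & ref_low)
    fn = np.sum(~pred_low & ref_low)
    tn = np.sum(~pred_low & ~ref_low)
    fp = np.sum(pred_low & ~ref_low)
    return tp / (tp + fn), tn / (tn + fp)
```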


Subjects
Bone Density, Lumbar Vertebrae/diagnostic imaging, Osteoporosis/diagnostic imaging, Tomography, X-Ray Computed/methods, Absorptiometry, Photon, Aged, Aged, 80 and over, Female, Humans, Lumbar Vertebrae/pathology, Male, Middle Aged, Organ Size, Osteoporosis/pathology, Phantoms, Imaging, Phosphates, Potassium Compounds
4.
Inf Sci (N Y); 422: 51-76, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29628529

ABSTRACT

We introduce a new semi-supervised classification method that extensively exploits knowledge. The method has three steps. First, the manifold regularization mechanism, adapted from the Laplacian support vector machine (LapSVM), is adopted to mine the manifold structure embedded in all training data, especially in the numerous unlabelled data. Meanwhile, by converting the labels into pairwise constraints, the pairwise constraint regularization formula (PCRF) is designed to compensate for the few but valuable labelled data. Second, by further combining the PCRF with the manifold regularization, the precise manifold and pairwise constraint jointly regularized formula (MPCJRF) is achieved. Third, by incorporating the MPCJRF into the framework of the conventional SVM, our approach, referred to as semi-supervised classification with extensive knowledge exploitation (SSC-EKE), is developed. The significance of our research is fourfold: 1) The MPCJRF is an underlying adjustment, with respect to the pairwise constraints, to the graph Laplacian enlisted for approximating the potential data manifold. This type of adjustment plays a correction role, as an unbiased estimation of the data manifold is difficult to obtain, whereas the pairwise constraints, converted from the given labels, have an overall high confidence level. 2) By transforming the values of the two terms in the MPCJRF such that they have the same range, with a trade-off factor varying within the fixed interval [0, 1), the appropriate impact of the pairwise constraints on the graph Laplacian can be self-adaptively determined. 3) The implication of extensive knowledge exploitation is embodied in SSC-EKE: the labelled examples are used not only to control the empirical risk but also to constitute the MPCJRF, and all data, both labelled and unlabelled, are recruited for model smoothness and manifold regularization. 4) The complete framework of SSC-EKE organically incorporates multiple theories, such as joint manifold and pairwise constraint-based regularization, smoothness in the reproducing kernel Hilbert space, empirical risk minimization, and spectral methods, which facilitates both the preferable classification accuracy and the generalizability of SSC-EKE.
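As a rough illustration of the joint regularization idea, the sketch below combines a graph-Laplacian manifold term with a pairwise-constraint term under a trade-off factor in [0, 1). The constraint-matrix encoding and the range normalisation are assumptions standing in for the paper's exact MPCJRF construction.

```python
import numpy as np

def graph_laplacian(W: np.ndarray) -> np.ndarray:
    """Unnormalised graph Laplacian L = D - W for an affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def mpcjrf(f: np.ndarray, W: np.ndarray, C: np.ndarray, beta: float) -> float:
    """Jointly regularised value in the spirit of MPCJRF: a manifold term
    f'Lf mixed with a pairwise-constraint term f'Cf, trade-off beta in
    [0, 1). C is a hypothetical constraint matrix (+1 must-link, -1
    cannot-link, 0 otherwise); rescaling both matrices to unit spectral
    norm stands in for the paper's same-range transformation."""
    L = graph_laplacian(W)
    Ln = L / max(np.linalg.norm(L, 2), 1e-12)
    Cn = C / max(np.linalg.norm(C, 2), 1e-12)
    return (1.0 - beta) * (f @ Ln @ f) + beta * (f @ Cn @ f)
```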

5.
Knowl Based Syst; 130: 33-50, 2017 Aug 15.
Article in English | MEDLINE | ID: mdl-30050232

ABSTRACT

We study a novel fuzzy clustering method to improve segmentation performance on the target texture image by leveraging the knowledge from a prior texture image. Two knowledge transfer mechanisms, i.e. knowledge-leveraged prototype transfer (KL-PT) and knowledge-leveraged prototype matching (KL-PM), are first introduced as the bases. Applying them, the knowledge-leveraged transfer fuzzy C-means (KL-TFCM) method and its three-stage-interlinked framework, comprising knowledge extraction, knowledge matching, and knowledge utilization, are developed. There are two specific versions, KL-TFCM-c and KL-TFCM-f, i.e. the crisp and flexible forms, which use the strategies of maximum matching degree and weighted sum, respectively. The significance of our work is fourfold: 1) Owing to the adjustability of the referable degree between the source and target domains, KL-PT is capable of appropriately learning the insightful knowledge, i.e. the cluster prototypes, from the source domain; 2) KL-PM is able to self-adaptively determine reasonable pairwise relationships of cluster prototypes between the source and target domains, even if the numbers of clusters differ in the two domains; 3) The joint action of KL-PM and KL-PT can effectively resolve the data inconsistency and heterogeneity between the source and target domains, e.g. differences in data distribution and cluster number. Thus, using the three-stage-based knowledge transfer, beneficial knowledge from the source domain can be extensively and self-adaptively leveraged in the target domain. As evidence of this, both KL-TFCM-c and KL-TFCM-f surpass many existing clustering methods in texture image segmentation; and 4) In the case of different cluster numbers between the source and target domains, KL-TFCM-f achieves higher clustering effectiveness and segmentation performance than KL-TFCM-c.
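A minimal sketch of the prototype-transfer idea, assuming an objective that adds a quadratic penalty pulling target-domain prototypes toward the source prototypes; the penalty weight lam (a stand-in for the 'referable degree') and the update rule are illustrative simplifications of KL-PT, not the paper's exact formulation.

```python
import numpy as np

def kl_pt_fcm(X: np.ndarray, src_protos: np.ndarray,
              n_iter: int = 100, m: float = 2.0, lam: float = 0.5):
    """Fuzzy c-means with a prototype-transfer penalty lam * ||V - S||^2.
    X: (n, d) target data; src_protos: (c, d) source-domain prototypes.
    Returns fuzzy memberships U (n, c) and target prototypes V (c, d)."""
    V = src_protos.astype(float).copy()   # initialise at source prototypes
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)  # standard FCM membership update
        Um = U ** m
        # Prototype update blends the data term with the source prototypes;
        # this is the closed-form minimiser of the penalised objective.
        V = (Um.T @ X + lam * src_protos) / (Um.sum(0)[:, None] + lam)
    return U, V
```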

6.
Pattern Recognit; 50: 155-177, 2016 Feb.
Article in English | MEDLINE | ID: mdl-27275022

ABSTRACT

Conventional soft-partition clustering approaches, such as fuzzy c-means (FCM), maximum entropy clustering (MEC) and fuzzy clustering by quadratic regularization (FC-QR), usually perform poorly when the data are insufficient or heavily polluted by noise or outliers. In order to address this challenge, the quadratic weights and Gini-Simpson diversity based fuzzy clustering model (QWGSD-FC) is first proposed as a basis of our work. Based on QWGSD-FC and inspired by transfer learning, two types of cross-domain, soft-partition clustering frameworks and their corresponding algorithms, referred to as type-I/type-II knowledge-transfer-oriented c-means (TI-KT-CM and TII-KT-CM), are subsequently presented. The primary contributions of our work are fourfold: (1) The QWGSD-FC model inherits the principal merits of FCM, MEC and FC-QR. With weight factors in the form of quadratic memberships, similar to FCM, it can measure the total intra-cluster deviation more effectively than the linear form used in MEC and FC-QR. Meanwhile, via the Gini-Simpson diversity index, analogous to Shannon entropy in MEC and equivalent to the quadratic regularization in FC-QR, QWGSD-FC tends to achieve unbiased probability assignments. (2) Owing to the reference knowledge from the source domain, both TI-KT-CM and TII-KT-CM demonstrate high clustering effectiveness as well as strong parameter robustness in the target domain. (3) TI-KT-CM refers merely to the historical cluster centroids, whereas TII-KT-CM uses both the historical cluster centroids and their associated fuzzy memberships as the reference. This indicates that TII-KT-CM features a more comprehensive knowledge-learning capability than TI-KT-CM and consequently exhibits better cross-domain clustering performance. (4) Neither the historical cluster centroids nor the associated fuzzy memberships involved in TI-KT-CM or TII-KT-CM can be inversely mapped into the raw data, meaning that both algorithms can work without disclosing the original data in the source domain, i.e. they offer good privacy protection for the source domain. In addition, convergence analyses of both TI-KT-CM and TII-KT-CM are conducted in our research. The experimental studies thoroughly evaluated and demonstrated these contributions on both synthetic and real-life data scenarios.

7.
EJNMMI Phys; 11(1): 42, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691232

ABSTRACT

BACKGROUND: Respiratory motion artefacts are a pitfall in thoracic PET/CT imaging. One source of these artefacts within PET images is the CT used for attenuation correction. The arbitrary respiratory phase in which the helical CT (CThelical) is acquired often causes misregistration between the PET and CT images, leading to inaccurate attenuation correction of the PET image. As a result, errors in tumour delineation or lesion uptake values can occur. To minimise the effect of motion in PET/CT imaging, a data-driven gating (DDG)-based motion match (MM) algorithm has been developed that estimates the phase of the CThelical and subsequently warps this CT to a given phase of the respiratory cycle, allowing it to be phase-matched to the PET. A data set was used in which four-dimensional CT (4DCT) had been acquired alongside PET/CT. The 4DCT allowed ground-truth CT phases to be generated and compared to the algorithm-generated motion-match CT (MMCT). Measurements of liver and lesion margin positions were taken across the CT images to determine any differences and to establish how well the algorithm warps the CThelical to a given phase (end-of-expiration, EE). RESULTS: Whilst the liver measurement showed a marginally significant difference between the 4DCT and MMCT (p = 0.045), no significant differences were found between the 4DCT and MMCT for lesion measurements (p = 1.0). In all instances, the CThelical was significantly different from the 4DCT (p < 0.001). Consequently, the 4DCT and MMCT can be considered equivalent with respect to warped CT generation, showing the DDG-based MM algorithm to be successful. CONCLUSION: The MM algorithm successfully enables phase-matching of a CThelical to the EE phase of a ground-truth 4DCT. This would reduce the motion artefacts caused by PET/CT misregistration without requiring additional patient dose (as a 4DCT would).

8.
Phys Med Biol; 69(16), 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39009009

ABSTRACT

Objective. We introduce a versatile methodology for the accurate modelling of PET imaging systems via Monte Carlo simulations, using the Geant4 application for tomographic emission (GATE) platform. Accurate Monte Carlo modelling involves the incorporation of a complete analytical signal processing chain, called the digitizer in GATE, to emulate the different count rates encountered in actual positron emission tomography (PET) systems. Approach. The proposed approach consists of two steps: (1) modelling the digitizer to replicate the detection chain of real systems, covering all available parameters, whether publicly accessible or supplied by manufacturers; (2) estimating the remaining parameters, i.e. background noise level, detection efficiency, and pile-up, using optimisation techniques based on experimental single and prompt event rates. We show that this two-step optimisation reproduces the other experimental count rates (true, scatter, and random) without the need for additional adjustments. This method has been applied and validated with experimental data derived from the NEMA count-losses test for three state-of-the-art SiPM-based time-of-flight (TOF)-PET systems: Philips Vereos, Siemens Biograph Vision 600 and GE Discovery MI 4-ring. Main results. The results show good agreement between experiments and simulations for the three PET systems, with absolute relative discrepancies below 3%, 6%, 6%, 7% and 12% for prompt, random, true, scatter and noise equivalent count rates, respectively, within the 0-10 kBq·ml⁻¹ activity concentration range typically observed in whole-body ¹⁸F scans. Significance. Overall, the proposed digitizer optimisation method was shown to be effective in reproducing count rates and NECR for three of the latest-generation SiPM-based TOF-PET imaging systems. The proposed methodology could be applied to other PET scanners.
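Step (2) can be illustrated with a small Python sketch: a hypothetical singles-rate model with efficiency, paralysable pile-up, and constant-background terms is fitted to measured rates by non-linear least squares. The model form and starting values are assumptions, not GATE's actual digitizer parametrisation.

```python
import numpy as np
from scipy.optimize import least_squares

def singles_model(activity: np.ndarray, params) -> np.ndarray:
    """Hypothetical singles-rate model: detection efficiency times activity,
    with paralysable dead-time (pile-up) losses plus a constant background
    (e.g. intrinsic crystal counts)."""
    eff, tau, bkg = params
    ideal = eff * activity
    return ideal * np.exp(-tau * ideal) + bkg

def fit_digitizer(activity: np.ndarray, measured_singles: np.ndarray):
    """Estimate background, efficiency and pile-up parameters from
    experimental singles rates (the optimisation step of the abstract)."""
    res = least_squares(
        lambda p: singles_model(activity, p) - measured_singles,
        x0=[0.01, 1e-7, 1e4], bounds=([0.0, 0.0, 0.0], np.inf))
    return res.x  # (eff, tau, bkg)
```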


Subjects
Image Processing, Computer-Assisted, Monte Carlo Method, Positron-Emission Tomography, Positron-Emission Tomography/methods, Image Processing, Computer-Assisted/methods
9.
EJNMMI Phys; 11(1): 13, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38294624

ABSTRACT

BACKGROUND: We propose a comprehensive evaluation of a Discovery MI 4-ring (DMI) model, using a Monte Carlo simulator (GATE) and a clinical reconstruction software package (PET toolbox). The following performance characteristics were compared with actual measurements according to NEMA NU 2-2018 guidelines: system sensitivity, count losses and scatter fraction (SF), coincidence time resolution (CTR), spatial resolution (SR), and image quality (IQ). For SR and IQ tests, reconstruction of time-of-flight (TOF) simulated data was performed using the manufacturer's reconstruction software. RESULTS: Simulated prompt, random, true, scatter and noise equivalent count rates closely matched the experimental rates with maximum relative differences of 1.6%, 5.3%, 7.8%, 6.6%, and 16.5%, respectively, in a clinical range of less than 10 kBq/mL. A 3.6% maximum relative difference was found between experimental and simulated sensitivities. The simulated spatial resolution was better than the experimental one. Simulated image quality metrics were relatively close to the experimental results. CONCLUSIONS: The current model is able to reproduce the behaviour of the DMI count rates in the clinical range and generate clinical-like images with a reasonable match in terms of contrast and noise.

10.
Phys Med Biol; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959903

ABSTRACT

Respiratory motion correction is beneficial in PET, as it can reduce artefacts caused by motion and improve quantitative accuracy. Methods of motion correction are commonly based on a respiratory trace obtained through an external device (such as the Real-Time Position Management (RPM) system) or through a data-driven method, for instance those based on dimensionality-reduction techniques such as PCA, which is a linear transformation onto the axes of greatest variation. Data-driven methods have the advantage of being non-invasive and can be applied post-acquisition. Their main downside, however, is that they are adversely affected by the tracer kinetics of a dynamic PET acquisition, so they have mostly been limited to static PET acquisitions. This work extends existing PCA-based data-driven motion correction methods to dynamic PET imaging. The methods explored include: a moving-window approach (similar to the Kinetic Respiratory Gating method of Schleyer et al.); extrapolation of the principal component from later time points to earlier time points; and a method to score, select, and combine multiple respiratory components. The resulting respiratory traces were evaluated on 22 data sets from a dynamic [18F]FDG study of patients with idiopathic pulmonary fibrosis, by calculating their correlation with a surrogate signal acquired using the RPM system. The results indicate that all methods produce better surrogate signals than conventional PCA applied to dynamic data (for instance, a higher correlation with a gold-standard respiratory trace). Extrapolating a late-time-point principal component produced more promising results than the moving window. Scoring, selecting, and combining components held benefits over all other methods. This work allows a surrogate signal to be extracted from dynamic PET data earlier in the acquisition and with greater accuracy than previous work, potentially allowing numerous other methods (for instance, respiratory motion correction) to be applied to such data where they previously could not.
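A minimal sketch of the moving-window variant, assuming frames arranged as a (time, features) array; the window length is a free parameter, and the per-window component sign would need aligning across windows in a real implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def moving_window_trace(frames: np.ndarray, window: int) -> np.ndarray:
    """Moving-window PCA: applying PCA within short windows limits the
    influence of tracer kinetics on the extracted respiratory signal.
    frames: (time, features) PET data; window: time bins per window.
    Note: each window's component sign is arbitrary and must be aligned
    (e.g. via overlap or a reference) in practice."""
    trace = np.zeros(len(frames))
    for start in range(0, len(frames) - window + 1, window):
        seg = frames[start:start + window]
        trace[start:start + window] = PCA(n_components=1).fit_transform(
            seg - seg.mean(axis=0))[:, 0]
    return trace

def score(trace: np.ndarray, rpm: np.ndarray) -> float:
    """Evaluation used in the study: correlation with the RPM surrogate."""
    return abs(np.corrcoef(trace, rpm)[0, 1])
```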

11.
Neuroimage; 63(3): 1273-84, 2012 Nov 15.
Article in English | MEDLINE | ID: mdl-22892332

ABSTRACT

Positron emission tomography (PET) can be used to quantify physiological parameters. However, quantification requires that an input function be measured, namely a plasma time-activity curve (TAC). Image-derived input functions (IDIFs) are attractive because they are noninvasive and involve almost no blood loss. However, the spatial resolution and the signal-to-noise ratio (SNR) of PET images are low, which degrades the accuracy of IDIFs. The objective of this study was to extract accurate input functions from microPET images with zero or one plasma sample using wavelet-packet-based sub-band decomposition independent component analysis (WP SDICA). Two approaches were used: the first used simulated dynamic rat images with different spatial resolutions and SNRs, and the second used dynamic images of eight Sprague-Dawley rats. We also compared our method with a population-based input function and a fuzzy c-means clustering approach, using normalized root mean square errors, area-under-curve errors, and correlation coefficients. Our results showed that the one-sample WP SDICA approach was more accurate than the other approaches in both the simulated and real comparisons, and its errors in the estimated metabolic rate were also the smallest.
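The one-sample calibration can be sketched as follows, using plain FastICA as a simplified stand-in for the wavelet-packet sub-band ICA of the paper; the component-selection heuristic (earliest peak) is an assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def one_sample_idif(tacs: np.ndarray, sample_idx: int, plasma_value: float):
    """Image-derived input function sketch: separate a blood-like source
    from regional TACs with ICA, then rescale it using one measured plasma
    sample. tacs: (regions, time) array with at least two regions;
    sample_idx: time frame of the single blood draw."""
    sources = FastICA(n_components=2, random_state=0).fit_transform(tacs.T)
    peak_times = np.argmax(np.abs(sources), axis=0)
    blood = sources[:, int(np.argmin(peak_times))]  # earliest-peaking source
    if blood[peak_times.min()] < 0:                 # make its peak positive
        blood = -blood
    return blood * plasma_value / blood[sample_idx]  # one-sample calibration
```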


Subjects
Algorithms, Blood/diagnostic imaging, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Animals, Cluster Analysis, Fluorodeoxyglucose F18, Fuzzy Logic, Male, Neural Networks, Computer, Radiopharmaceuticals, Rats, Rats, Sprague-Dawley
12.
IEEE Trans Med Imaging; 39(4): 819-832, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31425065

ABSTRACT

We propose a new method for generating synthetic CT images from modified Dixon (mDixon) MR data. The synthetic CT is used for attenuation correction (AC) when reconstructing PET data of the abdomen and pelvis. While MR does not intrinsically contain any information about photon attenuation, AC is needed in PET/MR systems in order to be quantitatively accurate and to meet the qualification standards required for use in many multi-centre trials. Existing MR-based synthetic CT generation methods either use advanced MR sequences that have long acquisition times and limited clinical availability, or match the MR images from a newly scanned subject to images in a library of MR-CT pairs, which has difficulty accounting for the diversity of human anatomy, especially in patients with pathologies. To address these deficiencies, we present a five-phase interlinked method that uses mDixon MR acquisition and advanced machine learning methods for synthetic CT generation. Both transfer fuzzy clustering and active learning-based classification (TFC-ALC) are used. The significance of our efforts is fourfold: 1) TFC-ALC is capable of better synthetic CT generation than methods currently in use on the challenging abdomen using only common Dixon-based scanning. 2) TFC initially partitions MR voxels into four groups (fat, bone, air, and soft tissue) via transfer learning; ALC can learn insightful classifiers, using as few but informative labeled examples as possible, to precisely distinguish bone, air, and soft tissue. Combining them, the TFC-ALC method successfully overcomes the inherent imperfection and potential uncertainty in the co-registration between CT and MR images. 3) Compared with existing methods, TFC-ALC features not only preferable synthetic CT generation but also improved parameter robustness, which facilitates its clinical practicability. Applying the proposed approach to mDixon MR data from ten subjects, the average mean absolute prediction deviation (MAPD) was 89.78±8.76, significantly better than the 133.17±9.67 obtained using the all-water (AW) method (p=4.11E-9) and the 104.97±10.03 obtained using the four-cluster-partitioning (FCP, i.e., external-air, internal-air, fat, and soft tissue) method (p=0.002). 4) Experiments on the PET SUV errors of these approaches show that TFC-ALC achieves the highest SUV accuracy and can generally reduce the SUV errors to 5% or less. These experimental results distinctively demonstrate the effectiveness of our proposed TFC-ALC method for synthetic CT generation on the abdomen and pelvis using only the commonly available Dixon pulse sequence.


Subjects
Abdomen/diagnostic imaging, Image Processing, Computer-Assisted/methods, Pelvis/diagnostic imaging, Positron-Emission Tomography/methods, Support Vector Machine, Cluster Analysis, Fuzzy Logic, Humans, Magnetic Resonance Imaging, Tomography, X-Ray Computed
13.
Phys Med Biol; 54(6): 1823-46, 2009 Mar 21.
Article in English | MEDLINE | ID: mdl-19258684

ABSTRACT

Medical images usually suffer from a partial volume effect (PVE), which may degrade the accuracy of any quantitative information extracted from the images. Our aim was to recover accurate radioactivity concentrations and time-activity curves (TACs) in microPET R4 quantification using ensemble learning independent component analysis (EL-ICA). We designed a digital cardiac phantom for this simulation, and to evaluate the ability of EL-ICA to correct the PVE, the simulated images were convolved with a Gaussian function (FWHM = 1-4 mm). The robustness of the proposed method towards noise was investigated by adding statistical noise (SNR = 2-16). For further evaluation, another set of cardiac phantoms was generated from the reconstructed images, and Poisson noise at different levels was added to the sinogram. In real experiments, four rat microPET images and a number of arterial blood samples were obtained; these were used to estimate the metabolic rate of FDG (MR(FDG)). Input functions estimated using the FastICA method were used for comparison. The results showed that EL-ICA could correct the PVE in both the simulated and real cases. After correcting for the PVE, the errors in MR(FDG) estimated by the EL-ICA method were smaller than those obtained when TACs were derived directly from the PET images or with the FastICA approach.


Subjects
Blood, Image Processing, Computer-Assisted/methods, Positron-Emission Tomography/methods, Animals, Artifacts, Heart/diagnostic imaging, Phantoms, Imaging, Rats, Sensitivity and Specificity
14.
Med Phys; 46(8): 3520-3531, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31063248

ABSTRACT

PURPOSE: Accurate photon attenuation assessment from MR data remains an unmet challenge in the thorax due to tissue heterogeneity and the difficulty of MR lung imaging. As thoracic tissues encompass the whole physiologic range of photon absorption, large errors can occur when using, for example, a uniform, water-equivalent or soft-tissue-only approximation. The purpose of this study was to introduce a method for voxel-wise thoracic synthetic CT (sCT) generation from MR data, for attenuation correction (AC) in PET/MR or for MR-only radiation treatment planning (RTP). METHODS: Acquisition: A radial stack-of-stars sequence combining ultra-short echo time (UTE) and modified Dixon (mDixon) acquisition was optimized for thoracic imaging. The UTE-mDixon pulse sequence collects MR signals at three TE times, denoted UTE, Echo1, and Echo2. Three-point mDixon processing was used to reconstruct water and fat images. Bias field correction was applied to avoid artifacts caused by inhomogeneity of the MR magnetic field. Analysis: Water fraction and R2* maps were estimated using the UTE-mDixon data, producing a total of seven MR features: UTE, Echo1, Echo2, Dixon water, Dixon fat, water fraction, and R2*. A feature selection process was performed to determine the optimal feature combination for the proposed automatic six-tissue classification for sCT generation. Fuzzy c-means was used for the automatic classification, followed by voxel-wise attenuation coefficient assignment as a weighted sum of those of the component tissues. Performance evaluation: MR data collected using the proposed pulse sequence were compared to those using a traditional two-point Dixon approach. Image quality measures, including image resolution and uniformity, were evaluated using an MR ACR phantom. Data collected from 25 normal volunteers were used to evaluate the accuracy of the proposed method compared to a template-based approach. Notably, the template approach is applicable here (normal volunteers) but may not be robust enough for patients with pathologies. RESULTS: The free-breathing UTE-mDixon pulse sequence yielded images of quality comparable to those obtained with the traditional breath-holding mDixon sequence. Furthermore, by capturing the signal before T2* decay, the UTE-mDixon image provided lung and bone information which the mDixon image did not. The combination of Dixon water, Dixon fat, and water fraction was the most robust for tissue clustering and supported the classification of six tissues (air, lung, fat, soft tissue, low-density bone, and dense bone) used to generate the sCT. The thoracic sCT had a mean absolute difference from the template-based (reference) CT of less than 50 HU, in better agreement with the reference CT than the results produced using the traditional Dixon-based data. CONCLUSION: MR thoracic acquisition and analyses have been established to automatically provide six distinguishable tissue types for generating sCT for MR-based AC in PET/MR and for MR-only RTP.
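The final voxel-wise assignment reduces to a weighted sum, as in this short sketch; the six attenuation coefficients are illustrative 511 keV values, not those used in the paper.

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients (cm^-1) for the six
# classes; these example values are NOT taken from the paper.
MU = np.array([0.000,   # air
               0.022,   # lung
               0.086,   # fat
               0.096,   # soft tissue
               0.110,   # low-density bone
               0.140])  # dense bone

def synthetic_mu_map(memberships: np.ndarray) -> np.ndarray:
    """Voxel-wise attenuation as a weighted sum of component-tissue
    coefficients, the weights being the fuzzy c-means memberships.
    memberships: (n_voxels, 6) array with rows summing to 1."""
    return memberships @ MU
```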


Subjects
Image Processing, Computer-Assisted/methods, Thorax/diagnostic imaging, Tomography, X-Ray Computed, Cluster Analysis, Humans
15.
Comput Methods Programs Biomed; 92(3): 289-93, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18508153

ABSTRACT

Beam hardening is a common problem affecting the quantitative accuracy of X-ray computed tomography (CT). We have developed two statistical reconstruction algorithms for poly-energetic X-ray CT that can effectively reduce the beam-hardening effect. Phantom tests were used to evaluate our approach in comparison with traditional correction methods. Unlike previous methods, our algorithm utilizes multiple energy-corresponding blank scans to estimate the attenuation map for a particular energy spectrum; it is therefore an energy-selective reconstruction. In addition to its benefits over other statistical algorithms for poly-energetic reconstruction, our algorithm has the advantage of not requiring prior knowledge of the object material, the energy spectrum of the source, or the energy sensitivity of the detector. The results showed an improvement in coefficient of variation, uniformity and signal-to-noise ratio; overall, this novel approach produces a better beam-hardening correction.


Subjects
Algorithms, Image Processing, Computer-Assisted/statistics & numerical data, Tomography, X-Ray Computed/methods, Humans
16.
Comput Methods Programs Biomed; 92(3): 299-304, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18423926

ABSTRACT

The term input function usually refers to the tracer plasma time-activity curve (pTAC), which is necessary for quantitative positron emission tomography (PET) studies. The purpose of this study was to derive the pTAC from the whole-blood time-activity curve (wTAC) by independent component analysis (ICA) estimation, using a novel method: the FDG blood-cell two-compartment model (BCM). This approach was compared to a number of published models, including linear haematocrit (HCT) correction, non-linear HCT correction and two-exponential correction. The results show that the normalized root mean square error (NRMSE) and the error of the area under the curve (EAUC) were smallest for the BCM estimate of the pTAC. Compartmental and graphical analyses were used to estimate the metabolic rate of FDG (MR(FDG)); the percentage error in MR(FDG) (PE(MRFDG)) was also smallest when estimated from the BCM-corrected pTAC. We conclude that the BCM is the better choice when converting a wTAC into a pTAC for quantification.
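The two error metrics reported above are easy to state in code; one common range normalisation is assumed for the NRMSE, which may differ from the paper's exact definition.

```python
import numpy as np

def _auc(y: np.ndarray, t: np.ndarray) -> float:
    """Trapezoidal-rule area under the curve."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def nrmse(est: np.ndarray, ref: np.ndarray) -> float:
    """Normalised RMSE between estimated and reference plasma TACs."""
    return float(np.sqrt(np.mean((est - ref) ** 2)) / (ref.max() - ref.min()))

def eauc(est: np.ndarray, ref: np.ndarray, t: np.ndarray) -> float:
    """Relative error of the area under the curve (EAUC)."""
    return abs(_auc(est, t) - _auc(ref, t)) / _auc(ref, t)
```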


Subjects
Models, Statistical, Rodentia/blood, Animals, Area Under Curve, Image Interpretation, Computer-Assisted/methods, Metabolic Clearance Rate, Positron-Emission Tomography, Principal Component Analysis/methods
17.
IEEE Access; 6: 28594-28610, 2018.
Article in English | MEDLINE | ID: mdl-31289704

ABSTRACT

Multi-view clustering, a dedicated countermeasure for heterogeneous multi-view data, is currently a hot topic in machine learning. However, many existing methods either neglect the effective collaboration among views during clustering or fail to distinguish the respective importance of attributes within views, instead treating them equivalently. Motivated by these challenges, and based on maximum entropy clustering (MEC), two specialized criteria, inter-view collaborative learning (IEVCL) and intra-view-weighted attributes (IAVWA), are first devised as the bases. Then, by organically incorporating IEVCL and IAVWA into the formulation of classic MEC, a novel collaborative multi-view clustering model and its matching algorithm, referred to as the view-collaborative, attribute-weighted MEC (VC-AW-MEC), are proposed. The significance of our efforts is threefold: 1) both IEVCL and IAVWA are devised on the basis of MEC, so the proposed VC-AW-MEC is qualified to handle as many multi-view data scenes as possible; 2) IEVCL seeks consensus across all involved views throughout clustering, whereas IAVWA adaptively discriminates the individual impact of the attributes within each view; and 3) by jointly leveraging IEVCL and IAVWA, the proposed VC-AW-MEC algorithm generally exhibits preferable clustering effectiveness and stability on heterogeneous multi-view data compared with some existing state-of-the-art approaches. Our efforts have been verified on many synthetic and real-world multi-view data scenes.

18.
Artif Intell Med; 90: 34-41, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30054121

ABSTRACT

BACKGROUND: Manual contouring remains the most laborious task in radiation therapy planning and is a major barrier to implementing routine Magnetic Resonance Imaging (MRI) Guided Adaptive Radiation Therapy (MR-ART). To address this, we propose a new artificial intelligence-based auto-contouring method for abdominal MR-ART, modeled after the human cognitive process of manual contouring. METHODS/MATERIALS: Our algorithm is based on two types of information flow, i.e. top-down and bottom-up. Top-down information is derived from simulation MR images. It grossly delineates the object based on its high-level information class by transferring the initial planning contours onto daily images. Bottom-up information is derived from pixel data by a supervised, self-adaptive, active-learning-based support vector machine. It uses low-level pixel features, such as intensity and location, to distinguish each target boundary from the background. The final result is obtained by fusing the top-down and bottom-up outputs in a unified artificial intelligence fusion framework. For evaluation, we used a dataset of four patients with locally advanced pancreatic cancer treated with MR-ART using a clinical system (MRIdian, Viewray, Oakwood Village, OH, USA). Each set included the simulation MRI and the onboard T1 MRI corresponding to a randomly selected treatment session. Each MRI had 144 axial slices of 266 × 266 pixels. Using the Dice Similarity Index (DSI) and the Hausdorff Distance Index (HDI), we compared the manual and automated contours for the liver, left and right kidneys, and the spinal cord. RESULTS: The average auto-segmentation time was two minutes per set. Visually, the automatic and manual contours were similar. Fused results achieved better accuracy than either the bottom-up or top-down method alone. The DSI values were above 0.86. The spinal canal contours yielded a low HDI value. CONCLUSION: With a DSI significantly higher than the commonly reported 0.7, our novel algorithm yields high segmentation accuracy. To our knowledge, this is the first fully automated contouring approach using T1 MRI images for adaptive radiotherapy.
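For reference, the DSI reported above is computed as in this short sketch.

```python
import numpy as np

def dice_similarity(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice Similarity Index between two binary masks (automatic vs manual
    contour); 1.0 means perfect overlap, and values above ~0.7 are commonly
    considered acceptable agreement."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```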


Subjects
Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Pancreatic Neoplasms/radiotherapy, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy, Image-Guided/methods, Support Vector Machine, Humans, Multimodal Imaging, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/pathology, Tomography, X-Ray Computed, Workflow
19.
Phys Med Biol; 63(12): 125001, 2018 Jun 08.
Article in English | MEDLINE | ID: mdl-29787382

ABSTRACT

The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically-used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
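The supervised mapping can be sketched as a multi-output regression; the random placeholder arrays below stand in for DECT voxel values and phantom-derived targets, and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch of the mapping described above: per-voxel dual-energy CT values ->
# four parametric outputs (Zeff, rho_e, Ix, RSP). In the study the targets
# come from tissue substitutes of known composition; the random arrays here
# are placeholders only.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 2))   # stand-in (HU_low_kVp, HU_high_kVp) pairs
y_train = rng.random((1000, 4))   # stand-in (Zeff, rho_e, Ix, RSP) targets

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)          # multi-output regression is supported
parametric_maps = rf.predict(X_train[:10])  # voxel-wise parametric estimates
```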


Subjects
Machine Learning, Tomography, X-Ray Computed/methods, Humans, Phantoms, Imaging
20.
IEEE Trans Neural Netw Learn Syst; 28(5): 1123-1138, 2017 May.
Article in English | MEDLINE | ID: mdl-26915134

ABSTRACT

The existing semisupervised spectral clustering approaches have two major drawbacks: either they cannot cope with multiple categories of supervision or they sometimes exhibit unstable effectiveness. To address these issues, two normalized affinity and penalty jointly constrained spectral clustering frameworks, as well as their corresponding algorithms, referred to as type-I affinity and penalty jointly constrained spectral clustering (TI-APJCSC) and type-II affinity and penalty jointly constrained spectral clustering (TII-APJCSC), respectively, are proposed in this paper. The significance of this paper is fourfold. First, benefiting from the distinctive affinity and penalty jointly constrained strategies, both TI-APJCSC and TII-APJCSC are substantially more effective than the existing methods. Second, both are fully compatible with the three well-known categories of supervision, i.e., class labels, pairwise constraints, and grouping information. Third, owing to the framework normalization, both are quite flexible: with a simple tradeoff factor varying in the small fixed interval (0, 1], they can self-adapt to any semisupervised scenario. Finally, both demonstrate strong robustness, not only to the number of pairwise constraints but also to the parameter for affinity measurement. As such, the novel TI-APJCSC and TII-APJCSC algorithms are very practical for medium- and small-scale semisupervised data sets. The experimental studies thoroughly evaluated and demonstrated these advantages on both synthetic and real-life semisupervised data sets.
