ABSTRACT
Magnetic Resonance Spectroscopic Imaging (MRSI) is a non-invasive imaging technique for studying metabolism and has become a crucial tool for understanding neurological diseases, cancers and diabetes. High spatial resolution MRSI is needed to characterize lesions, but in practice MRSI is acquired at low resolution due to time and sensitivity restrictions caused by the low metabolite concentrations. Therefore, there is an imperative need for a post-processing approach to generate high-resolution MRSI from low-resolution data that can be acquired fast and with high sensitivity. Deep learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but their capability to generate accurate and high-quality images remains limited. Recently, diffusion models have demonstrated learning capability superior to other generative models in various tasks, but sampling from diffusion models requires iterating through a large number of diffusion steps, which is time-consuming. This work introduces a Flow-based Truncated Denoising Diffusion Model (FTDDM) for super-resolution MRSI, which shortens the diffusion process by truncating the diffusion chain and estimating the truncated steps with a normalizing flow-based network. The network is conditioned on upscaling factors to enable multi-scale super-resolution. To train and evaluate the deep learning models, we developed a 1H-MRSI dataset acquired from 25 high-grade glioma patients. We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold compared to the baseline diffusion model. Neuroradiologists' evaluations confirmed the clinical advantages of our method, which also supports uncertainty estimation and sharpness adjustment, extending its potential clinical applications.
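As an illustration of the truncation idea, the sketch below runs a standard DDPM reverse chain only from an intermediate step downward, with the starting sample supplied by a flow network. Here `eps_model` and `flow_model` are hypothetical stand-ins for the trained denoiser and normalizing flow, not the published FTDDM implementation.

```python
import torch

def truncated_ddpm_sample(eps_model, flow_model, lr_image, t_trunc, betas):
    """Conceptual sketch: run the reverse diffusion chain only from step
    t_trunc down to 0. The flow network replaces the skipped steps
    T..t_trunc by mapping Gaussian noise (conditioned on the low-resolution
    input) directly to an estimate of x_{t_trunc}."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    z = torch.randn_like(lr_image)
    x = flow_model(z, cond=lr_image)                # estimated x_{t_trunc}

    for t in reversed(range(t_trunc)):
        eps = eps_model(x, t, cond=lr_image)        # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # standard DDPM update
    return x

# Toy stand-ins so the sketch runs end-to-end (not trained models).
eps_model = lambda x, t, cond: torch.zeros_like(x)
flow_model = lambda z, cond: cond + 0.1 * z
lr = torch.randn(1, 1, 32, 32)
betas = torch.linspace(1e-4, 0.02, steps=1000)
sr = truncated_ddpm_sample(eps_model, flow_model, lr, t_trunc=100, betas=betas)
```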
ABSTRACT
BACKGROUND: Accurate mortality risk quantification is crucial for the management of hepatocellular carcinoma (HCC); however, most scoring systems are subjective. PURPOSE: To develop and independently validate a machine learning mortality risk quantification method for HCC patients using standard-of-care clinical data and liver radiomics on baseline magnetic resonance imaging (MRI). METHODS: This retrospective study included all patients treated at our institution who had multiphasic contrast-enhanced MRI at the time of diagnosis. Patients were censored at their last date of follow-up, end-of-observation, or liver transplantation date. The data were randomly sampled into independent cohorts, with 85% for development and 15% for independent validation. An automated liver segmentation framework was adopted for radiomic feature extraction. A random survival forest combined clinical and radiomic variables to predict overall survival (OS), and performance was evaluated using Harrell's C-index. RESULTS: A total of 555 treatment-naïve HCC patients (mean age, 63.8 years ± 8.9 [standard deviation]; 118 females) with MRI at the time of diagnosis were included, of whom 287 (51.7%) died after a median of 14.40 months (interquartile range, 22.23); the median follow-up was 32.47 months (interquartile range, 61.5). The developed risk prediction framework required 1.11 min on average and yielded C-indices of 0.8503 and 0.8234 in the development and independent validation cohorts, respectively, outperforming conventional clinical staging systems. Predicted risk scores were significantly associated with OS (p < .00001 in both cohorts). CONCLUSIONS: Machine learning reliably, rapidly, and reproducibly predicts mortality risk in patients with hepatocellular carcinoma from data routinely acquired in clinical practice. CLINICAL RELEVANCE STATEMENT: Precision mortality risk prediction using routinely available standard-of-care clinical data and automated MRI radiomic features could enable personalized follow-up strategies, guide management decisions, and improve clinical workflow efficiency in tumor boards. KEY POINTS: • Machine learning enables hepatocellular carcinoma mortality risk prediction using standard-of-care clinical data and automated radiomic features from multiphasic contrast-enhanced MRI. • Automated mortality risk prediction achieved state-of-the-art performance for mortality risk quantification and outperformed conventional clinical staging systems. • Patients were stratified into low-, intermediate-, and high-risk groups with significantly different survival times, generalizable to an independent evaluation cohort.
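A minimal sketch of the modeling step, assuming scikit-survival is available and using synthetic stand-in features (not the study data): a random survival forest is fit on an 85% development split and scored with Harrell's C-index on the held-out 15%.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for combined clinical + radiomic features.
X = rng.normal(size=(n, 10))
time = rng.exponential(scale=30, size=n)          # months to event/censoring
event = rng.random(n) < 0.5                       # True = death observed
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

# 85/15 development / independent validation split, as in the study design.
split = int(0.85 * n)
rsf = RandomSurvivalForest(n_estimators=200, random_state=0)
rsf.fit(X[:split], y[:split])

risk = rsf.predict(X[split:])                     # higher score = higher risk
cindex = concordance_index_censored(event[split:], time[split:], risk)[0]
print(f"Harrell's C-index (validation): {cindex:.3f}")
```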
Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Machine Learning; Magnetic Resonance Imaging; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/mortality; Female; Male; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/mortality; Middle Aged; Retrospective Studies; Prognosis; Magnetic Resonance Imaging/methods; Contrast Media; Aged; Risk Assessment/methods
ABSTRACT
OBJECTIVES: To develop and evaluate a deep convolutional neural network (DCNN) for automated liver segmentation, volumetry, and radiomic feature extraction on contrast-enhanced portal venous phase magnetic resonance imaging (MRI). MATERIALS AND METHODS: This retrospective study included hepatocellular carcinoma patients from an institutional database with portal venous MRI. After manual segmentation, the data were randomly split into independent training, validation, and internal testing sets. De-identified scans from a collaborating institution were used for external testing, and the public LiverHccSeg dataset was used for further external validation. A 3D DCNN was trained to automatically segment the liver. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) with respect to manual segmentation. A Mann-Whitney U test was used to compare the internal and external test sets. Agreement of volumetry and radiomic features was assessed using the intraclass correlation coefficient (ICC). RESULTS: In total, 470 patients met the inclusion criteria (63.9±8.2 years; 376 males) and 20 patients were used for external validation (41±12 years; 13 males). DSC segmentation accuracy of the DCNN was similarly high between the internal (0.97±0.01) and external (0.96±0.03) test sets (p=0.28) and demonstrated robust segmentation performance on public testing (0.93±0.03). Agreement of liver volumetry was satisfactory in the internal (ICC, 0.99), external (ICC, 0.97), and public (ICC, 0.85) test sets. Radiomic features demonstrated excellent agreement in the internal (mean ICC, 0.98±0.04), external (mean ICC, 0.94±0.10), and public (mean ICC, 0.91±0.09) datasets. CONCLUSION: Automated liver segmentation yields robust and generalizable segmentation performance on MRI data and can be used for volumetry and radiomic feature extraction. CLINICAL RELEVANCE STATEMENT: Liver volumetry, anatomic localization, and extraction of quantitative imaging biomarkers require accurate segmentation, but manual segmentation is time-consuming. A deep convolutional neural network demonstrates fast and accurate segmentation performance on T1-weighted portal venous MRI. KEY POINTS: • This deep convolutional neural network yields robust and generalizable liver segmentation performance on internal, external, and public testing data. • Automated liver volumetry demonstrated excellent agreement with manual volumetry. • Automated liver segmentations can be used for robust and reproducible radiomic feature extraction.
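For reference, the Dice similarity coefficient used here to quantify segmentation accuracy can be computed as in the short sketch below (toy masks, not the study data).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two overlapping "liver" masks on a small grid.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```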
Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Magnetic Resonance Imaging; Humans; Male; Magnetic Resonance Imaging/methods; Female; Middle Aged; Liver Neoplasms/diagnostic imaging; Retrospective Studies; Carcinoma, Hepatocellular/diagnostic imaging; Adult; Neural Networks, Computer; Liver/diagnostic imaging; Contrast Media; Aged; Radiomics
ABSTRACT
OBJECTIVE: To compute a dense prostate cancer risk map for the individual patient post-biopsy from magnetic resonance imaging (MRI) and to provide a more reliable evaluation of its fitness in prostate regions that were not identified as suspicious for cancer by a human reader in pre- and intra-biopsy imaging analysis. METHODS: Low-level pre-biopsy MRI biomarkers from targeted and non-targeted biopsy locations were extracted and statistically tested for representativeness against biomarkers from non-biopsied prostate regions. A probabilistic machine learning classifier was optimized to map biomarkers to their core-level pathology, followed by extrapolation of pathology scores to non-biopsied prostate regions. Goodness-of-fit was assessed at targeted and non-targeted biopsy locations for the post-biopsy individual patient. RESULTS: Our experiments showed high predictability of imaging biomarkers in differentiating histopathology scores in thousands of non-targeted core-biopsy locations (ROC-AUCs: 0.85-0.88), but also high variability between patients (median ROC-AUC [IQR]: 0.81-0.89 [0.29-0.40]). CONCLUSION: The sparseness of prostate biopsy data makes the validation of whole-gland risk mapping a non-trivial task. Previous studies i) focused on targeted biopsy locations, although biopsy specimens drawn from systematically scattered locations across the prostate constitute a more representative sample of non-biopsied regions, and ii) estimated prediction power across predicted instances (e.g., biopsy specimens) with no patient distinction, which may lead to unreliable estimation of model fitness to the individual patient due to variation between patients in instance count, imaging characteristics, and pathologies. SIGNIFICANCE: This study proposes personalized whole-gland prostate cancer risk mapping post-biopsy to allow clinicians to better stage disease and personalize focal therapy treatment plans.
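A small sketch of the per-patient evaluation idea, assuming scikit-learn and toy core-level data: discrimination is scored within each patient rather than pooled across all biopsy cores, and the median and IQR across patients are reported. All variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def per_patient_auc(patient_ids, y_true, y_score):
    """ROC-AUC computed separately per patient, so model fitness to the
    individual patient is visible instead of being averaged away."""
    aucs = {}
    for pid in np.unique(patient_ids):
        m = patient_ids == pid
        if len(np.unique(y_true[m])) == 2:        # AUC needs both classes
            aucs[pid] = roc_auc_score(y_true[m], y_score[m])
    vals = np.array(list(aucs.values()))
    q1, med, q3 = np.percentile(vals, [25, 50, 75])
    return aucs, med, q3 - q1                     # median and IQR across patients

rng = np.random.default_rng(0)
pids = rng.integers(0, 20, 400)                   # 400 cores from 20 patients
y = rng.integers(0, 2, 400)                       # core-level pathology label
s = y * 0.5 + rng.random(400) * 0.8               # toy predicted risk scores
aucs, median_auc, iqr = per_patient_auc(pids, y, s)
print(f"median ROC-AUC {median_auc:.2f} [IQR {iqr:.2f}]")
```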
Subjects
Prostatic Neoplasms; Male; Humans; Biopsy, Large-Core Needle/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Image-Guided Biopsy/methods; Magnetic Resonance Imaging/methods; Biomarkers
ABSTRACT
Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolutional neural networks have been widely utilized for lesion segmentation. However, these methods often fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically meaningful prostate-specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space and thus control the size of the lesion prediction. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. Results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in terms of both the Dice coefficient and the centroid distance metric.
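A minimal sketch of the conditioning mechanism, assuming a FiLM-style feature-wise transformation in PyTorch; layer sizes and module names are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: a scalar biomarker (e.g., PSAD)
    predicts a per-channel scale and shift applied to latent features."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * num_channels))

    def forward(self, features: torch.Tensor, psad: torch.Tensor):
        gamma, beta = self.mlp(psad).chunk(2, dim=-1)        # (B, C) each
        shape = (-1, features.shape[1]) + (1,) * (features.dim() - 2)
        return gamma.reshape(shape) * features + beta.reshape(shape)

film = FiLM(num_channels=64)
feats = torch.randn(2, 64, 8, 16, 16)       # latent features from a 3D U-Net
psad = torch.tensor([[0.12], [0.31]])       # PSA density values
out = film(feats, psad)                     # same shape, biomarker-conditioned
```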
ABSTRACT
Head motion occurring during brain positron emission tomography (PET) image acquisition decreases image quality and induces quantification errors. We previously introduced a Deep Learning Head Motion Correction (DL-HMC) method based on supervised learning against gold-standard measurements from the Polaris Vicra motion-tracking device and showed the potential of this method. In this study, we upgrade our network to a multi-task architecture in order to include image appearance prediction in the learning process. This multi-task Deep Learning Head Motion Correction (mtDL-HMC) model was trained on 21 subjects and showed enhanced motion prediction performance compared to our previous DL-HMC method in both quantitative and qualitative evaluations on 5 testing subjects. We also evaluate the trustworthiness of network predictions by performing Monte Carlo Dropout at inference on testing subjects. We discard data associated with high motion-prediction uncertainty and show that doing so does not harm the quality of the reconstructed images and can even improve it.
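A generic sketch of Monte Carlo Dropout at inference, assuming a PyTorch model: dropout layers stay active during repeated forward passes, and the spread of the predictions serves as the uncertainty estimate. The toy network below is a stand-in, not the mtDL-HMC model.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Monte Carlo Dropout at inference: re-enable only the dropout layers,
    run repeated stochastic forward passes, and use the standard deviation
    of the predictions as an uncertainty estimate."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()                        # keep dropout stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # prediction, uncertainty

# Toy regressor standing in for the motion-prediction network (6 rigid params).
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 6))
mean, std = mc_dropout_predict(net, torch.randn(4, 16))
keep = std.mean(dim=1) < std.mean()          # discard the most uncertain samples
```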
ABSTRACT
Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Automated methods are therefore necessary but require rigorous validation against high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging (MRI) from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes a scientific reading, co-registered contrast-enhanced multiphasic MRI scans with corresponding manual segmentations by the two radiologists, and relevant metadata, offering researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability of liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrated high agreement (mean Dice, 0.95 ± 0.01 [standard deviation]), whereas HCC tumor segmentations showed higher variation (mean Dice, 0.85 ± 0.16 [standard deviation]). The applications of LiverHccSeg are manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.
ABSTRACT
Whole-body dynamic FDG-PET imaging through a continuous-bed-motion (CBM) mode multi-pass acquisition protocol is a promising approach for metabolism measurement. However, inter-pass misalignment originating from body movement can degrade parametric quantification. We aim to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. Twenty-seven subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes and subsequently 19 CBM passes (frames). Inter-pass motion correction was executed using non-rigid image registration with multi-resolution, B-spline free-form deformations. Parametric images were then generated by Patlak analysis. The overlaid Patlak slope Ki and y-intercept Vb images were visualized to qualitatively evaluate motion impact and correction effect. The normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROIs). In Ki images, ROI statistics were collected and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After inter-pass motion correction was applied, the spatial misalignment between Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377) decreased significantly. The visual appearance of each hypermetabolic ROI in Ki images was enhanced, and 3.59% and 3.67% average absolute percentage changes were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values changed substantially with motion correction (p = 0.0021). The AUC of both mean Ki and maximum Ki increased after motion correction, suggesting the potential of inter-pass motion correction to enhance oncological discrimination capacity.
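The Patlak step can be illustrated with a short sketch, assuming NumPy/SciPy and toy time-activity curves: after an equilibration time t*, C_T(t)/C_p(t) plotted against (∫C_p dτ)/C_p(t) is linear with slope Ki and intercept Vb. The fitting-error measure below is a simplified, unweighted analogue of the NFE metric.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak_fit(ct, cp, t, t_star=20.0):
    """Graphical Patlak analysis of tissue curve ct against plasma input cp."""
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)
    x = int_cp / cp                       # "Patlak time"
    y = ct / cp
    mask = t >= t_star                    # linear regime only
    ki, vb = np.polyfit(x[mask], y[mask], 1)
    resid = y[mask] - (ki * x[mask] + vb)
    nfe = np.sum(resid ** 2) / np.sum(y[mask] ** 2)   # simplified fitting error
    return ki, vb, nfe

t = np.linspace(0.5, 90.0, 60)                         # minutes
cp = 10.0 * np.exp(-0.05 * t) + 1.0                    # toy plasma input
ct = 0.02 * cumulative_trapezoid(cp, t, initial=0.0) + 0.30 * cp  # Ki=0.02, Vb=0.30
ki, vb, nfe = patlak_fit(ct, cp, t)
print(f"Ki = {ki:.4f} /min, Vb = {vb:.3f}, NFE = {nfe:.2e}")
```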
ABSTRACT
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Attenuation maps (µ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, in clinical practice, SPECT and CT scans are acquired sequentially, potentially inducing misregistration between the two images and further producing AC artifacts. Conventional intensity-based registration methods show poor performance in the cross-modality registration of SPECT and CT-derived µ-maps, since the two imaging modalities can present totally different intensity patterns. Deep learning has shown great potential in medical image registration. However, existing deep learning strategies for medical image registration encode the input images by simply concatenating the feature maps of different convolutional layers, which may not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived µ-maps had not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived µ-maps. DuSFE is designed based on the co-attention mechanism of two cross-connected input data streams. The channel-wise or spatial features of SPECT and µ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion in different spatial dimensions. Our experiments on clinical MPI studies demonstrated that the DuSFE-embedded neural network generated significantly lower registration errors and more accurate AC SPECT images than existing methods. We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.
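A simplified sketch of the two-stream squeeze-fusion-excitation idea in PyTorch: channel descriptors from both inputs are jointly encoded and used to recalibrate each stream. This is a conceptual illustration inspired by the co-attention mechanism, not the authors' exact DuSFE module (see their repository for the real implementation).

```python
import torch
import torch.nn as nn

class DualStreamSE(nn.Module):
    """Two-stream squeeze-and-excitation: SPECT and mu-map channel
    descriptors are fused, then each stream is recalibrated jointly."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                  # squeeze
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, 2 * channels), nn.Sigmoid())

    def forward(self, spect: torch.Tensor, mu: torch.Tensor):
        b, c = spect.shape[:2]
        desc = torch.cat([self.pool(spect).view(b, c),
                          self.pool(mu).view(b, c)], dim=1)  # joint encoding
        w = self.fuse(desc)                                  # excitation
        shape = (b, c, 1, 1, 1)
        return spect * w[:, :c].view(shape), mu * w[:, c:].view(shape)

block = DualStreamSE(channels=16)
s, m = block(torch.randn(1, 16, 8, 32, 32), torch.randn(1, 16, 8, 32, 32))
```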
Subjects
Tomography, Emission-Computed, Single-Photon; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Tomography, Emission-Computed, Single-Photon/methods; Heart; Phantoms, Imaging; Image Processing, Computer-Assisted/methods
ABSTRACT
OBJECTIVES: The objective of this study was to translate a deep learning (DL) approach for semiautomated analysis of body composition (BC) measures from standard-of-care CT images to investigate the prognostic value of BC in pediatric, adolescent, and young adult (AYA) patients with lymphoma. METHODS: This 10-year retrospective, single-site study of 110 pediatric and AYA patients with lymphoma involved manual segmentation of fat and muscle tissue from 260 CT imaging datasets obtained as part of routine imaging at initial staging and first therapeutic follow-up. A DL model was trained to perform semiautomated image segmentation of adipose and muscle tissue. The association between BC measures and the occurrence of 3-year late effects was evaluated using Cox proportional hazards regression analyses. RESULTS: DL-guided measures of BC were in close agreement with those obtained by a human rater, as demonstrated by high Dice scores (≥ 0.95) and correlations (r > 0.99) for each tissue of interest. Cox proportional hazards regression analyses revealed that patients with elevated subcutaneous adipose tissue at baseline and first follow-up, along with patients who possessed lower volumes of skeletal muscle at first follow-up, have increased risk of late effects compared to their peers. CONCLUSIONS: DL provides rapid and accurate quantification of image-derived measures of BC that are associated with risk for treatment-related late effects in pediatric and AYA patients with lymphoma. Image-based monitoring of BC measures may enhance future opportunities for personalized medicine for children with lymphoma by identifying patients at the highest risk for late effects of treatment. KEY POINTS: • Deep learning-guided CT image analysis of body composition measures achieved a high level of agreement with manual image analysis. • Pediatric patients with more fat and less muscle during the course of cancer treatment were more likely to experience a serious adverse event compared to their clinical counterparts. • Deep learning of body composition may add value to routine CT imaging by offering real-time monitoring of pediatric, adolescent, and young adult patients at high risk for late effects of cancer treatment.
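A minimal sketch of the survival-analysis step, assuming the lifelines library and synthetic stand-in body-composition variables (not the study data): a Cox proportional hazards model relates the covariates to time-to-late-effect.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 110
# Synthetic stand-ins for DL-derived body-composition measures.
df = pd.DataFrame({
    "subcutaneous_fat_cm3": rng.normal(1500, 400, n),
    "skeletal_muscle_cm3": rng.normal(2200, 500, n),
    "time_months": rng.exponential(24, n),     # time to late effect / censoring
    "late_effect": rng.random(n) < 0.3,        # True = late effect observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="late_effect")
cph.print_summary()    # hazard ratios per body-composition covariate
```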
Subjects
Body Composition; Deep Learning; Lymphoma; Adolescent; Child; Humans; Disease Progression; Lymphoma/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed; Male; Female; Proportional Hazards Models; Predictive Value of Tests
ABSTRACT
Myocardial ischemia/infarction causes wall-motion abnormalities in the left ventricle. Therefore, reliable motion estimation and strain analysis using 3D+time echocardiography for localization and characterization of myocardial injury are valuable for early detection and targeted interventions. Previous unsupervised cardiac motion tracking methods rely on heavily-weighted regularization functions to smooth out the noisy displacement fields in echocardiography. In this work, we present a Co-Attention Spatial Transformer Network (STN) for improved motion tracking and strain analysis in 3D echocardiography. Co-Attention STN aims to extract inter-frame dependent features to improve motion tracking in otherwise noisy 3D echocardiography images. We also propose a novel temporal constraint to further regularize the motion field to produce smooth and realistic cardiac displacement paths over time without prior assumptions on cardiac motion. Our experimental results on both synthetic and in vivo 3D echocardiography datasets demonstrate that our Co-Attention STN provides superior performance compared to existing methods. Strain analysis from Co-Attention STN also corresponds well with the matched SPECT perfusion maps, demonstrating the clinical utility of 3D echocardiography for infarct localization.
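A minimal sketch of two core pieces in PyTorch, under the assumption of a dense displacement field: trilinear warping of a 3D volume via a spatial transformer, and a simple first-difference penalty standing in for the paper's temporal constraint.

```python
import torch
import torch.nn.functional as F

def warp_3d(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp volumes (B, 1, D, H, W) with displacement fields (B, 3, D, H, W),
    expressed in normalized [-1, 1] coordinates, via trilinear resampling."""
    b = moving.shape[0]
    # Identity sampling grid from an identity affine.
    base = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(b, 1, 1),
                         moving.shape, align_corners=False)
    disp = flow.permute(0, 2, 3, 4, 1)               # (B, D, H, W, 3)
    return F.grid_sample(moving, base + disp, align_corners=False)

def temporal_smoothness(flows: torch.Tensor) -> torch.Tensor:
    """Penalize frame-to-frame changes in the displacement fields so the
    estimated cardiac trajectories vary smoothly over time (a simple
    first-difference analogue of the temporal constraint)."""
    return (flows[1:] - flows[:-1]).pow(2).mean()

frames = torch.randn(5, 1, 16, 32, 32)               # toy 3D echo sequence
flows = torch.zeros(5, 3, 16, 32, 32, requires_grad=True)
warped = warp_3d(frames, flows)
loss = temporal_smoothness(flows)
```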
Subjects
Echocardiography, Three-Dimensional; Myocardial Infarction; Ventricular Dysfunction, Left; Humans; Heart; Echocardiography, Three-Dimensional/methods; Echocardiography/methods
ABSTRACT
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (µ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (µ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting µ-DL from λ-MLAA and µ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
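The projection-domain idea can be illustrated with a 2D sketch, assuming scikit-image: attenuation maps are compared through their line integrals (sinograms) rather than voxel intensities, since AC depends on integrals of µ along lines of response. This is a simplified stand-in for the paper's physics-based loss, shown here as an evaluation metric.

```python
import numpy as np
from skimage.transform import radon

def projection_domain_error(mu_pred: np.ndarray, mu_ct: np.ndarray,
                            num_angles: int = 24) -> float:
    """Relative error between sinograms (line integrals) of a predicted and
    a CT-derived attenuation map, over a set of projection angles."""
    theta = np.linspace(0.0, 180.0, num_angles, endpoint=False)
    sino_pred = radon(mu_pred, theta=theta, circle=False)
    sino_ct = radon(mu_ct, theta=theta, circle=False)
    return float(np.mean(np.abs(sino_pred - sino_ct)) / np.mean(np.abs(sino_ct)))

# Toy 2D slices standing in for predicted and CT-derived attenuation maps.
mu_ct = np.zeros((64, 64)); mu_ct[16:48, 16:48] = 0.096   # ~water, 1/cm
mu_pred = mu_ct + np.random.default_rng(0).normal(0, 0.002, mu_ct.shape)
print(f"relative sinogram error: {projection_domain_error(mu_pred, mu_ct):.4f}")
```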
Subjects
Deep Learning; Neoplasms; Humans; Positron Emission Tomography Computed Tomography; Multimodal Imaging/methods; Image Processing, Computer-Assisted/methods; Fluorodeoxyglucose F18; Magnetic Resonance Imaging/methods; Algorithms; Positron-Emission Tomography/methods
ABSTRACT
Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework taking fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contributions of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and method performance is qualitatively and quantitatively evaluated through MOLAR reconstruction studies and corresponding brain region-of-interest (ROI) standardized uptake value (SUV) evaluation. Additionally, we compared our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms other methods on all subjects and can accurately estimate motion for subjects outside the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.
ABSTRACT
Head movement during long scan sessions degrades the quality of reconstruction in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust to testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most correlated inherent information, the head region, for motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing in HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without the dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
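A generic sketch of the cross-attention mechanism in PyTorch, with illustrative dimensions: queries come from reference-image tokens and keys/values from moving-image tokens, so attention concentrates on spatially corresponding regions. This is not the paper's exact network.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Cross-attention between reference and moving PET representations:
    queries from the reference features, keys/values from the moving
    features, followed by a residual connection and layer norm."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ref_tokens: torch.Tensor, mov_tokens: torch.Tensor):
        # ref_tokens, mov_tokens: (B, N, dim) token sequences from an encoder.
        out, _ = self.attn(query=ref_tokens, key=mov_tokens, value=mov_tokens)
        return self.norm(ref_tokens + out)

block = CrossAttentionBlock()
fused = block(torch.randn(2, 64, 128), torch.randn(2, 64, 128))
# A small head could regress the 6 rigid parameters from fused.mean(dim=1).
```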
ABSTRACT
Head motion presents a continuing problem in brain PET studies. A wealth of motion correction (MC) algorithms have been proposed in the past, including both hardware-based methods and data-driven methods. However, in most real brain PET studies, in the absence of a ground truth or gold standard of motion information, it is challenging to objectively evaluate MC quality. For MC evaluation, image-domain metrics, e.g., standardized uptake value (SUV) changes before and after MC, are commonly used, but this measure lacks objectivity because 1) other factors, e.g., attenuation correction, scatter correction and parameters used in the reconstruction, will confound MC effectiveness; 2) SUV only reflects final image quality, and it cannot precisely inform when an MC method performed well or poorly during the scan time period; 3) SUV is tracer-dependent and head motion may cause increases or decreases in SUV for different tracers, which complicates evaluation of MC effectiveness. Here, we present a new algorithm, motion-corrected centroid-of-distribution (MCCOD), to perform objective quality control for measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of tracer distribution after performing rigid MC using the existing motion information. MCCOD is used to inform whether the motion information is accurate using the PET raw data only, i.e., without PET image reconstruction; inaccurate motion information typically leads to abrupt changes in the MCCOD trace. MCCOD was validated using simulation studies and was tested on real studies acquired from both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain mask segmentation was implemented, which is shown to be necessary for non-TOF MCCOD generation. MCCOD is shown to be effective in detecting abrupt translation motion errors in slowly varying tracer distributions caused by the motion tracking hardware and can be used to compare different motion estimation methods as well as to improve existing motion information.
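A conceptual, list-mode-level sketch of the MCCOD idea, assuming per-time-bin rigid transforms are available: apply the candidate motion correction to event coordinates, trace the centroid per bin, and flag abrupt jumps. Function names and the jump threshold are illustrative, not the published algorithm.

```python
import numpy as np

def mccod_trace(event_xyz, event_time, transforms, bin_s=1.0):
    """Motion-corrected centroid of distribution: apply each time bin's rigid
    transform (R, t) to the event coordinates in that bin, then take the
    center of mass, yielding a 3D trace over time."""
    t_bins = np.floor(event_time / bin_s).astype(int)
    trace = np.zeros((t_bins.max() + 1, 3))
    for b in range(t_bins.max() + 1):
        pts = event_xyz[t_bins == b]                  # events in this bin
        R, t = transforms[b]                          # candidate rigid MC
        trace[b] = (pts @ R.T + t).mean(axis=0)       # corrected centroid
    return trace

def flag_jumps(trace, thresh_mm=2.0):
    """Flag time bins where the corrected COD jumps abruptly, suggesting
    inaccurate motion information."""
    step = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.where(step > thresh_mm)[0] + 1

# Toy usage: 10 s of events with identity transforms (i.e., no correction).
rng = np.random.default_rng(0)
xyz = rng.normal(size=(5000, 3))
times = rng.uniform(0.0, 10.0, size=5000)
ident = [(np.eye(3), np.zeros(3)) for _ in range(10)]
print(flag_jumps(mccod_trace(xyz, times, ident)))
```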
Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Motion; Algorithms; Brain/diagnostic imaging
ABSTRACT
PURPOSE: There is ongoing clinical need to improve estimates of disease outcome in prostate cancer. Machine learning (ML) approaches to pathologic diagnosis and prognosis are a promising and increasingly used strategy. In this study, we use an ML algorithm for prediction of adverse outcomes at radical prostatectomy (RP) using whole-slide images (WSIs) of prostate biopsies with Grade Group (GG) 2 or 3 disease. METHODS: We performed a retrospective review of prostate biopsies collected at our institution which had corresponding RP, GG 2 or 3 disease in one or more cores, and no biopsies with higher than GG 3 disease. A hematoxylin and eosin-stained core needle biopsy from each site with GG 2 or 3 disease was scanned and used as the sole input for the algorithm. The ML pipeline had three phases: image preprocessing, feature extraction, and adverse outcome prediction. First, patches were extracted from each biopsy scan. Subsequently, the pre-trained Visual Geometry Group-16 convolutional neural network was used for feature extraction. A representative feature vector was then used as input to an Extreme Gradient Boosting classifier for predicting the binary adverse outcome. We subsequently assessed patient clinical risk using the CAPRA score for comparison with the ML pipeline results. RESULTS: The data set included 361 WSIs from 107 patients (56 with adverse pathology at RP). The areas under the receiver operating characteristic curve for the ML classification were 0.72 (95% CI, 0.62 to 0.81), 0.65 (95% CI, 0.53 to 0.79), and 0.89 (95% CI, 0.79 to 1.00) for the entire cohort, GG 2 patients, and GG 3 patients, respectively, similar to the performance of the CAPRA clinical risk assessment. CONCLUSION: We provide evidence for the potential of ML algorithms to use WSIs of needle core prostate biopsies to estimate clinically relevant prostate cancer outcomes.
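A minimal sketch of the feature-extraction and classification stages, assuming torchvision and XGBoost with toy inputs; aggregating patch features by simple averaging is an illustrative simplification of building the representative feature vector.

```python
import numpy as np
import torch
from torchvision.models import vgg16, VGG16_Weights
from xgboost import XGBClassifier

# Pre-trained VGG-16 convolutional trunk as a fixed feature extractor.
backbone = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)   # ImageNet stats
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

@torch.no_grad()
def patch_features(patches: torch.Tensor) -> np.ndarray:
    """Average-pool VGG-16 feature maps into one 512-d vector per patch."""
    feats = backbone((patches - mean) / std)          # (N, 512, h, w)
    return feats.mean(dim=(2, 3)).numpy()

# Toy inputs: 8 RGB patches from one scanned biopsy, pooled to a slide vector.
patches = torch.rand(8, 3, 224, 224)
slide_vec = patch_features(patches).mean(axis=0, keepdims=True)   # (1, 512)

# Extreme Gradient Boosting classifier for the binary adverse outcome.
clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
X_train = np.random.rand(50, 512)                     # toy training vectors
y_train = np.random.randint(0, 2, 50)                 # toy outcome labels
clf.fit(X_train, y_train)
prob_adverse = clf.predict_proba(slide_vec)[:, 1]
```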
Subjects
Prostate; Prostatic Neoplasms; Biopsy; Biopsy, Large-Core Needle; Eosine Yellowish-(YS); Hematoxylin; Humans; Machine Learning; Male; Prostate/pathology; Prostate/surgery; Prostatectomy; Prostatic Neoplasms/diagnosis; Prostatic Neoplasms/pathology; Prostatic Neoplasms/surgery
ABSTRACT
Head motion during PET scans causes image quality degradation, decreased concentration in regions with high uptake and incorrect outcome measures from kinetic analysis of dynamic datasets. Previously, we proposed a data-driven method, center of tracer distribution (COD), to detect head motion without an external motion tracking device. There, motion was detected using one dimension of the COD trace with a semiautomatic detection algorithm, requiring multiple user-defined parameters and manual intervention. In this study, we developed a new data-driven motion detection algorithm, which is automatic, self-adaptive to noise level, does not require user-defined parameters and uses all three dimensions of the COD trace (3DCOD). 3DCOD was first validated and tested using 30 simulation studies (18F-FDG, N = 15; 11C-raclopride (RAC), N = 15) with large motion. The proposed motion correction method was tested on 22 real human datasets, with 20 acquired from a high resolution research tomograph (HRRT) scanner (18F-FDG, N = 10; 11C-RAC, N = 10) and 2 acquired from the Siemens Biograph mCT scanner. Real-time hardware-based motion tracking information (Vicra) was available for all real studies and was used as the gold standard. 3DCOD was compared to Vicra, no motion correction (NMC), one-direction COD (our previous method, called 1DCOD) and two conventional frame-based image registration (FIR) algorithms, i.e., FIR1 (based on predefined frames reconstructed with attenuation correction) and FIR2 (without attenuation correction) for both simulation and real studies. For the simulation studies, 3DCOD yielded -2.3 ± 1.4% (mean ± standard deviation across all subjects and 11 brain regions) error in region of interest (ROI) uptake for 18F-FDG (-3.4 ± 1.7% for 11C-RAC across all subjects and 2 regions) as compared to Vicra (perfect correction) while NMC, FIR1, FIR2 and 1DCOD yielded -25.4 ± 11.1% (-34.5 ± 16.1% for 11C-RAC), -13.4 ± 3.5% (-16.1 ± 4.6%), -5.7 ± 3.6% (-8.0 ± 4.5%) and -2.6 ± 1.5% (-5.1 ± 2.7%), respectively. For real HRRT studies, 3DCOD yielded -0.3 ± 2.8% difference for 18F-FDG (-0.4 ± 3.2% for 11C-RAC) as compared to Vicra while NMC, FIR1, FIR2 and 1DCOD yielded -14.9 ± 9.0% (-24.5 ± 14.6%), -3.6 ± 4.9% (-13.4 ± 14.3%), -0.6 ± 3.4% (-6.7 ± 5.3%) and -1.5 ± 4.2% (-2.2 ± 4.1%), respectively. In summary, the proposed motion correction method yielded comparable performance to the hardware-based motion tracking method for multiple tracers, including very challenging cases with large frequent head motion, in studies performed on a non-TOF scanner.
Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Algorithms; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Kinetics; Motion; Movement; Positron-Emission Tomography/methods
ABSTRACT
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS: Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists and tumor SUV and volume measures were reported, as well as evaluation using conventional image analysis metrics. RESULTS: µ-DL yielded high resolution and fine detail recovery of the attenuation map, which was superior in quality as compared to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: -3.6 ± 4.4% vs. -1.7 ± 4.5% for 18F-FDG (N = 152), -4.3 ± 5.1% vs. 0.4 ± 2.8% for 68Ga-DOTATATE (N = 70), and -7.3 ± 2.9% vs. -2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., -8.4 ± 14.5% (OSEMMLAA) vs. -3.0 ± 15.0% for 18F-FDG, -14.1 ± 19.7% vs. 1.8 ± 11.6% for 68Ga-DOTATATE, and -15.9 ± 9.1% vs. -6.4 ± 6.4% for 18F-Fluciclovine. CONCLUSIONS: The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68Ga-DOTATATE and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation. The proposed method provides clinically equivalent quality as compared to CT in attenuation correction for the three tracers.
Subjects
Deep Learning; Neoplasms; Fluorodeoxyglucose F18; Humans; Image Processing, Computer-Assisted/methods; Positron Emission Tomography Computed Tomography; Positron-Emission Tomography; Radionuclide Imaging; Radiopharmaceuticals
ABSTRACT
Segmentation of the prostate into specific anatomical zones is important for radiological assessment of prostate cancer in magnetic resonance imaging (MRI). Of particular interest is segmenting the prostate into two regions of interest: the central gland (CG) and peripheral zone (PZ). In this paper, we propose to integrate an anatomical atlas of prostate zone shape into a deep learning semantic segmentation framework to segment the CG and PZ in T2-weighted MRI. Our approach incorporates anatomical information in the form of a probabilistic prostate zone atlas and utilizes a dynamically controlled hyperparameter to combine the atlas with the semantic segmentation result. In addition to providing significantly improved segmentation performance, this hyperparameter can be dynamically adjusted during the inference stage to provide users with a mechanism to refine the segmentation. We validate our approach using an external test dataset and demonstrate Dice similarity coefficient values (mean±SD) of 0.91±0.05 for the CG and 0.77±0.16 for the PZ, significantly improving upon the baseline segmentation results obtained without the atlas. All code is publicly available on GitHub: https://github.com/OnofreyLab/prostate_atlas_segm_miccai2022.
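A minimal sketch of the atlas-combination step, assuming softmax probability maps from the network and a probabilistic zone atlas; the blending weight (here `lam`) plays the role of the dynamically adjustable hyperparameter described above (the published code implements the actual scheme).

```python
import numpy as np

def atlas_blended_labels(p_net, p_atlas, lam):
    """Convex combination of network class probabilities with a probabilistic
    zone atlas; lam in [0, 1] can be adjusted at inference to let users
    refine the segmentation toward the atlas prior."""
    assert 0.0 <= lam <= 1.0
    p = (1.0 - lam) * p_net + lam * p_atlas            # (C, D, H, W)
    return p.argmax(axis=0)                            # final zone labels

# Toy 3-class (background / CG / PZ) probability maps on a small volume.
rng = np.random.default_rng(0)
p_net = rng.dirichlet(np.ones(3), size=(8, 32, 32)).transpose(3, 0, 1, 2)
p_atlas = rng.dirichlet(np.ones(3), size=(8, 32, 32)).transpose(3, 0, 1, 2)
labels = atlas_blended_labels(p_net, p_atlas, lam=0.3)
```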
ABSTRACT
Head movement is a major limitation in brain positron emission tomography (PET) imaging, which results in image artifacts and quantification errors. Head motion correction plays a critical role in quantitative image analysis and diagnosis of nervous system diseases. However, to date, there is no approach that can track head motion continuously without using an external device. Here, we develop a deep learning-based algorithm to predict rigid motion for brain PET by leveraging existing dynamic PET scans with gold-standard motion measurements from external Polaris Vicra tracking. We propose a novel Deep Learning for Head Motion Correction (DL-HMC) methodology that consists of three components: (i) PET input data encoder layers; (ii) regression layers to estimate the six rigid motion transformation parameters; and (iii) feature-wise transformation (FWT) layers to condition the network on tracer time-activity. The input of DL-HMC is sampled pairs of one-second 3D cloud representations of the PET data and the output is the prediction of six rigid transformation motion parameters. We trained this network in a supervised manner using the Vicra motion tracking information as the gold standard. We quantitatively evaluate DL-HMC by comparing to gold-standard Vicra measurements and qualitatively evaluate the reconstructed images as well as perform region of interest standardized uptake value (SUV) measurements. An algorithm ablation study was performed to determine the contributions of each of our DL-HMC design choices to network performance. Our results demonstrate accurate motion prediction performance for brain PET using a data-driven registration approach without external motion tracking hardware. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_miccai2022.
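As a small illustration of the regression target, the sketch below converts six rigid motion parameters (three translations, three rotations) into a 4x4 homogeneous transform; the rotation order and units are illustrative assumptions, not necessarily the paper's convention.

```python
import math
import torch

def rigid_matrix(params) -> torch.Tensor:
    """Build a 4x4 homogeneous rigid transform from six parameters
    (tx, ty, tz, rx, ry, rz), rotations in radians applied as Rz @ Ry @ Rx."""
    tx, ty, tz, rx, ry, rz = [float(p) for p in params]
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rz = torch.tensor([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    T = torch.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx               # combined rotation
    T[:3, 3] = torch.tensor([tx, ty, tz])  # translation
    return T

motion = torch.tensor([1.5, -0.8, 0.2, 0.01, 0.03, -0.02])  # mm and radians
print(rigid_matrix(motion))
```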