Results 1 - 4 of 4
1.
Phys Med Biol ; 69(4), 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38252969

ABSTRACT

Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction, from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, a U-Net with mean absolute error loss (DL-MAE), a U-Net with mean squared error loss (DL-MSE), and a U-Net with perceptual loss (DL-Perceptual), were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET images from the PET/MR scanner as inputs. The DL- and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed-rank tests. Main results. sCT images from the DL-MAE, DL-MSE, and DL-Perceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DL-MSE and DL-Perceptual sCTs and those using the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to the SUV analysis. Significance. A 3D U-Net with MSE or perceptual loss can be implemented in a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
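
The abstract compares three U-Net training losses for synthetic CT prediction. A minimal sketch of what those loss choices could look like in PyTorch follows; this is not the authors' code, and the VGG16 feature layer for the perceptual loss and the 2D slice handling are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

mae_loss = nn.L1Loss()   # DL-MAE objective
mse_loss = nn.MSELoss()  # DL-MSE objective

class PerceptualLoss(nn.Module):
    """Compare sCT and reference CT in the feature space of a frozen VGG16
    (the layer cutoff is an assumption; the abstract does not specify it)."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, sct, ct):
        # VGG expects 3-channel input; replicate the single CT channel.
        sct3, ct3 = sct.repeat(1, 3, 1, 1), ct.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.features(sct3), self.features(ct3))

# Illustrative use on a batch of 2D slices (the paper's model is a 3D U-Net;
# applying a 2D perceptual loss slice-wise is one common workaround):
sct = torch.rand(4, 1, 128, 128)  # synthetic CT predicted from NAC PET
ct = torch.rand(4, 1, 128, 128)   # registered reference CT
print(mae_loss(sct, ct), mse_loss(sct, ct), PerceptualLoss()(sct, ct))
```

In practice a perceptual term is often combined with a pixel-wise loss; whether the DL-Perceptual model used such a combination is not stated in the abstract.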


Subject(s)
Deep Learning; Positron Emission Tomography Computed Tomography; Humans; Female; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods
2.
Radiol Imaging Cancer ; 3(1): e200091, 2021 01.
Article in English | MEDLINE | ID: mdl-33575660

ABSTRACT

Purpose: To compare the measurement of glucose uptake in primary invasive breast cancer using simultaneous, time-of-flight breast PET/MRI with prone time-of-flight PET/CT. Materials and Methods: In this prospective study, women with biopsy-proven invasive breast cancer undergoing preoperative breast MRI from 2016 to 2018 were eligible. Participants who had fasted underwent prone PET/CT of the breasts approximately 60 minutes after injection of 370 MBq (10 mCi) of fluorine 18 fluorodeoxyglucose (18F-FDG), followed by prone PET/MRI using standard clinical breast MRI sequences performed simultaneously with PET acquisition. Volumes of interest were drawn for tumors and contralateral normal breast fibroglandular tissue to calculate standardized uptake values (SUVs). Spearman correlation, Wilcoxon signed-rank tests, Mann-Whitney tests, and Bland-Altman analyses were performed. Results: Twenty-three women (mean age, 50 years; range, 33-70 years) were included. Correlation between tumor uptake values measured with PET/MRI and PET/CT was strong (rs = 0.95-0.98). No difference existed between modalities for tumor maximum SUV (SUVmax) normalized to normal breast tissue SUVmean (normSUVmax) (P = .58). The least measurement bias was observed with normSUVmax: +3.86% (95% limits of agreement: -28.92%, +36.64%). Conclusion: These results demonstrate measurement agreement between PET/CT, the current reference standard for tumor glucose uptake quantification, and simultaneous time-of-flight breast 18F-FDG PET/MRI. Keywords: Breast, Comparative Studies, PET/CT, PET/MR. Supplemental material is available for this article. © RSNA, 2021. See also the commentary by Mankoff and Surti in this issue.
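
As a hedged sketch of the agreement analyses named above (Spearman correlation, normalized SUVmax, Wilcoxon signed-rank test, and Bland-Altman bias with 95% limits of agreement), the code below uses SciPy and NumPy; the arrays are illustrative values, not study data.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def norm_suv_max(tumor_suv_max, normal_breast_suv_mean):
    """Tumor SUVmax normalized to contralateral normal-tissue SUVmean."""
    return tumor_suv_max / normal_breast_suv_mean

def bland_altman(petmr, petct):
    """Percent-difference Bland-Altman bias and 95% limits of agreement."""
    pct_diff = 100.0 * (petmr - petct) / ((petmr + petct) / 2.0)
    bias = pct_diff.mean()
    half_width = 1.96 * pct_diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Illustrative per-tumor normSUVmax measurements (not study data):
petmr = np.array([8.1, 5.4, 12.3, 6.7, 9.9])
petct = np.array([7.8, 5.6, 11.9, 6.9, 9.5])
rho, _ = spearmanr(petmr, petct)   # correlation between modalities
stat, p = wilcoxon(petmr, petct)   # paired test for a modality difference
bias, limits = bland_altman(petmr, petct)
print(f"rs={rho:.2f}, P={p:.2f}, bias={bias:+.2f}%, LoA={limits}")
```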


Subject(s)
Breast Neoplasms; Positron Emission Tomography Computed Tomography; Breast Neoplasms/diagnostic imaging; Female; Glucose; Humans; Magnetic Resonance Imaging; Middle Aged; Multimodal Imaging; Positron-Emission Tomography; Prospective Studies; Radiopharmaceuticals
3.
J Am Coll Radiol ; 18(7): 992-999, 2021 07.
Article in English | MEDLINE | ID: mdl-33607067

ABSTRACT

PURPOSE: Incidental pulmonary embolism (IPE) can be found on body CT. The aim of this study was to evaluate the feasibility of using artificial intelligence to identify missed IPE on a large number of CT examinations. METHODS: This retrospective analysis included all single-phase chest, abdominal, and pelvic (CAP) and abdominal and pelvic (AP) CT examinations performed at a single center over 1 year for indications other than identification of PE. Proprietary visual classification and natural language processing software was used to analyze images and reports from all CT examinations, followed by a two-step human adjudication process to classify cases as true positive, false positive, true negative, or false negative. Descriptive statistics were assessed for the prevalence of IPE and for features of missed IPEs (subsegmental versus central, unifocal versus multifocal, presence or absence of right heart strain). Interrater agreement for radiologist readers was also calculated. RESULTS: A total of 11,913 CT examinations (6,398 CAP, 5,515 AP) were included. Thirty false-negative examinations were identified among CAP studies (0.47%; 95% confidence interval [CI], 0.32%-0.67%) and 19 among AP studies (0.34%; 95% CI, 0.21%-0.54%). During manual review, readers showed substantial agreement for identification of IPE on CAP (κ = 0.76; 95% CI, 0.66-0.86) and nearly perfect agreement for identification of IPE on AP (κ = 0.86; 95% CI, 0.76-0.97). Forty-nine missed IPEs (0.41%; 95% CI, 0.30%-0.54%) were ultimately identified, compared with 79 IPEs (0.66%; 95% CI, 0.53%-0.83%) identified at initial clinical interpretation. CONCLUSIONS: Artificial intelligence can efficiently analyze CT examinations to identify potential missed IPE. These results can inform peer-review and quality-control efforts and could potentially be implemented in a prospective fashion.
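
The two statistics reported above, interrater agreement (κ) and missed-IPE prevalence with a 95% CI, can be computed with standard libraries. A minimal sketch follows; the reader labels are invented for illustration, and the Wilson interval is an assumption, since the abstract does not state which CI method the authors used.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.proportion import proportion_confint

# Hypothetical per-examination reader calls (1 = IPE present, 0 = absent):
reader1 = np.array([1, 0, 0, 1, 0, 1, 0, 0])
reader2 = np.array([1, 0, 1, 1, 0, 1, 0, 0])
kappa = cohen_kappa_score(reader1, reader2)  # chance-corrected agreement

# Missed-IPE prevalence on CAP examinations, as in the abstract: 30 of 6,398.
low, high = proportion_confint(count=30, nobs=6398, alpha=0.05, method="wilson")
print(f"kappa={kappa:.2f}; prevalence 95% CI: {100*low:.2f}%-{100*high:.2f}%")
```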


Subject(s)
Artificial Intelligence; Pulmonary Embolism; Humans; Prevalence; Prospective Studies; Pulmonary Embolism/diagnostic imaging; Pulmonary Embolism/epidemiology; Quality Improvement; Retrospective Studies; Tomography, X-Ray Computed
4.
Phys Med Biol ; 65(23): 23NT03, 2020 12 23.
Article in English | MEDLINE | ID: mdl-33120371

ABSTRACT

There has been substantial interest in developing techniques for synthesizing CT-like images from MRI inputs, with important applications in simultaneous PET/MR and radiotherapy planning. Deep learning has recently shown great potential for solving this problem. The goal of this research was to investigate the capability of four common clinical MRI sequences (T1-weighted gradient-echo [T1], T2-weighted fat-suppressed fast spin-echo [T2-FatSat], post-contrast T1-weighted gradient-echo [T1-Post], and fast spin-echo T2-weighted fluid-attenuated inversion recovery [CUBE-FLAIR]) as inputs to a deep CT synthesis pipeline. Data were obtained retrospectively from 92 subjects who had undergone an MRI and a CT scan on the same day. Each patient's MR and CT scans were registered to one another using affine registration. The deep learning model was a convolutional encoder-decoder network with skip connections, similar to the U-Net architecture, with Inception-V3-inspired blocks in place of sequential convolution blocks. After training for 150 epochs with a batch size of 6, the model was evaluated using the structural similarity index (SSIM), peak SNR (PSNR), mean absolute error (MAE), and Dice coefficient. Feasible results were attainable for each image type, and no single image type was superior across all analyses. The MAE (in HU) of the synthesized CT over the whole brain was 51.236 ± 4.504 for CUBE-FLAIR, 45.432 ± 8.517 for T1, 44.558 ± 7.478 for T1-Post, and 45.721 ± 8.7767 for T2, demonstrating feasible and compelling results on clinical images. Deep learning-based synthesis of CT images from MRI is possible with a wide range of inputs, suggesting that viable synthetic CT images can be created from a variety of clinical sequence types.
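
A minimal sketch of the four evaluation metrics named above (MAE in HU, PSNR, SSIM, and Dice coefficient), computed between a synthetic CT and a reference CT with NumPy and scikit-image; the HU data range, the bone threshold used to form the Dice masks, and the random arrays standing in for registered volumes are all assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_sct(sct_hu, ct_hu, hu_range=(-1024.0, 3000.0), bone_hu=300.0):
    """Compare a synthetic CT against a registered reference CT (both in HU)."""
    lo, hi = hu_range
    data_range = hi - lo
    mae = np.abs(sct_hu - ct_hu).mean()                    # MAE in HU
    psnr = peak_signal_noise_ratio(ct_hu, sct_hu, data_range=data_range)
    ssim = structural_similarity(ct_hu, sct_hu, data_range=data_range)
    pred, ref = sct_hu > bone_hu, ct_hu > bone_hu          # crude bone masks
    dice = 2.0 * (pred & ref).sum() / (pred.sum() + ref.sum() + 1e-8)
    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim, "Dice": dice}

# Illustrative call on random slices (real inputs would be registered volumes):
rng = np.random.default_rng(0)
ct = rng.uniform(-1024, 3000, size=(256, 256))
sct = ct + rng.normal(0, 50, size=ct.shape)
print(evaluate_sct(sct, ct))
```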


Subject(s)
Brain Neoplasms/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Humans; Retrospective Studies