ABSTRACT
PURPOSE: To develop a self-supervised learning method to retrospectively estimate T1 and T2 values from clinical weighted MRI. METHODS: A self-supervised learning approach was constructed to estimate T1, T2, and proton density maps from conventional T1- and T2-weighted images. MR physics models were employed to regenerate the weighted images from the network outputs, and the network was optimized based on the loss calculated between the synthesized and input weighted images, alongside additional constraints based on prior information. The method was evaluated on healthy volunteer data, with conventional mapping as the reference. Reproducibility was examined on two 3.0T scanners. Performance in tumor characterization was assessed by applying the method to a public glioblastoma dataset. RESULTS: For T1 and T2 estimation from three weighted images (T1 MPRAGE, T1 gradient echo, and T2 turbo spin echo), the deep learning method achieved global voxel-wise errors ≤9% in brain parenchyma and regional errors ≤12.2% in six types of brain tissue. The regional measurements obtained from the two scanners showed mean differences ≤2.4% and correlation coefficients >0.98, demonstrating excellent reproducibility. In the 50 glioblastoma patients, the retrospective quantification results were in line with literature reports from prospective methods, and T2 values were higher in tumor regions, with a sensitivity of 0.90 and a specificity of 0.92 in a voxel-wise classification task between normal and abnormal regions. CONCLUSION: The self-supervised learning method is promising for retrospective T1 and T2 quantification from clinical MR images, with the potential to improve the availability of quantitative MRI and facilitate brain tumor characterization.
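As a rough illustration of the physics-based self-supervision described above, the sketch below lets a network predict proton density, T1, and T2 maps, re-synthesizes the three weighted contrasts with simplified closed-form signal models, and compares them with the acquired inputs. The sequence parameters and the simplified MPRAGE/GRE/TSE equations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def synth_gre(pd, t1, tr=0.25, flip_deg=70.0):
    # Spoiled gradient-echo (Ernst) signal, ignoring T2* decay; times in seconds.
    a = torch.deg2rad(torch.tensor(flip_deg))
    e1 = torch.exp(-tr / t1.clamp(min=1e-3))
    return pd * torch.sin(a) * (1 - e1) / (1 - torch.cos(a) * e1)

def synth_mprage(pd, t1, ti=0.9):
    # Simplified inversion-prepared T1 weighting: magnitude of the inverted longitudinal signal.
    return pd * torch.abs(1 - 2 * torch.exp(-ti / t1.clamp(min=1e-3)))

def synth_tse(pd, t2, te=0.08):
    # Turbo spin-echo approximated by mono-exponential T2 decay at the effective TE.
    return pd * torch.exp(-te / t2.clamp(min=1e-3))

def self_supervised_loss(pred, inputs):
    # pred: dict of network outputs (pd, t1, t2 maps); inputs: dict of normalized weighted images.
    pd, t1, t2 = pred["pd"], pred["t1"], pred["t2"]
    return (F.l1_loss(synth_mprage(pd, t1), inputs["mprage"])
            + F.l1_loss(synth_gre(pd, t1), inputs["gre"])
            + F.l1_loss(synth_tse(pd, t2), inputs["tse"]))
```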
Subjects
Brain Neoplasms , Brain , Glioblastoma , Magnetic Resonance Imaging , Humans , Glioblastoma/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Retrospective Studies , Reproducibility of Results , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Male , Female , Adult , Middle Aged , Algorithms , Supervised Machine Learning , Deep Learning , Image Interpretation, Computer-Assisted/methods , Aged
ABSTRACT
PURPOSE: To develop a deep learning method to synthesize conventional contrast-weighted images in the brain from MR multitasking spatial factors. METHODS: Eighteen subjects were imaged using a whole-brain quantitative T1-T2-T1ρ MR multitasking sequence. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 gradient echo, and T2 fluid-attenuated inversion recovery were acquired as target images. A 2D U-Net-based neural network was trained to synthesize conventional weighted images from the MR multitasking spatial factors. Quantitative assessment and image quality rating by two radiologists were performed to evaluate the quality of the deep-learning-based synthesis, in comparison with Bloch-equation-based synthesis from MR multitasking quantitative maps. RESULTS: The deep-learning synthetic images showed brain tissue contrasts comparable to the reference images from true acquisitions and were substantially better than the Bloch-equation-based synthesis results. Averaged over the three contrasts, the deep learning synthesis achieved a normalized root mean square error of 0.184 ± 0.075, a peak SNR of 28.14 ± 2.51, and a structural-similarity index of 0.918 ± 0.034, all significantly better than Bloch-equation-based synthesis (p < 0.05). The radiologists' ratings showed that, compared with the true acquisitions, the deep learning synthesis had no notable quality degradation and was better than the Bloch-equation-based synthesis. CONCLUSION: A deep learning technique was developed to synthesize conventional weighted images from MR multitasking spatial factors in the brain, enabling the simultaneous acquisition of multiparametric quantitative maps and clinical contrast-weighted images in a single scan.
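For context, the reported image-quality metrics (normalized RMSE, peak SNR, structural similarity) can be computed with scikit-image as sketched below; this is a generic evaluation snippet, not the study's own code, and the Euclidean normalization for NRMSE is an assumption.

```python
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)

def image_quality(reference, synthesized):
    # reference, synthesized: 2D float arrays on a comparable intensity scale.
    data_range = float(reference.max() - reference.min())
    return {
        "nrmse": normalized_root_mse(reference, synthesized, normalization="euclidean"),
        "psnr": peak_signal_noise_ratio(reference, synthesized, data_range=data_range),
        "ssim": structural_similarity(reference, synthesized, data_range=data_range),
    }
```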
Subjects
Deep Learning , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
ABSTRACT
PURPOSE: To develop a deep-learning-based method to quantify multiple parameters in the brain from conventional contrast-weighted images. METHODS: Eighteen subjects were imaged using an MR Multitasking sequence to generate reference T1 and T2 maps in the brain. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 GRE, and T2 FLAIR were acquired as input images. A U-Net-based neural network was trained to estimate T1 and T2 maps simultaneously from the contrast-weighted images. Six-fold cross-validation was performed to compare the network outputs with the MR Multitasking references. RESULTS: The deep-learning T1/T2 maps were comparable with the references, and brain tissue structures and image contrasts were well preserved. A peak signal-to-noise ratio >32 dB and a structural similarity index >0.97 were achieved for both parameter maps. Calculated on brain parenchyma (excluding CSF), the mean absolute errors (and mean percentage errors) for T1 and T2 maps were 52.7 ms (5.1%) and 5.4 ms (7.1%), respectively. ROI measurements on four tissue compartments (cortical gray matter, white matter, putamen, and thalamus) showed that T1 and T2 values provided by the network outputs were in agreement with the MR Multitasking reference maps. The mean differences were smaller than ±1%, and limits of agreement were within ±5% for T1 and within ±10% for T2 after taking the mean differences into account. CONCLUSION: A deep-learning-based technique was developed to estimate T1 and T2 maps from conventional contrast-weighted images in the brain, enabling simultaneous qualitative and quantitative MRI without modifying clinical protocols.
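The ROI agreement figures quoted above (mean difference and limits of agreement) follow the usual Bland-Altman recipe; a minimal sketch with hypothetical values is shown below, and the variable names are illustrative.

```python
import numpy as np

def roi_agreement(reference_vals, estimated_vals):
    # Paired per-subject ROI means (e.g., T1 or T2 in ms) for one tissue compartment.
    ref = np.asarray(reference_vals, dtype=float)
    est = np.asarray(estimated_vals, dtype=float)
    pct_diff = 100.0 * (est - ref) / ref            # per-subject percentage difference
    bias = pct_diff.mean()                          # mean difference
    sd = pct_diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # bias and 95% limits of agreement

# Hypothetical white-matter T1 values (ms):
bias, loa = roi_agreement([830, 845, 820, 850], [835, 840, 828, 846])
print(f"bias = {bias:.2f}%, LoA = ({loa[0]:.2f}%, {loa[1]:.2f}%)")
```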
Subjects
Deep Learning , Brain/diagnostic imaging , Gray Matter , Humans , Magnetic Resonance Imaging , Signal-To-Noise Ratio
ABSTRACT
Automatic segmentation of skin lesions is crucial for diagnosing and treating skin diseases. Although current medical image segmentation methods have significantly improved skin lesion segmentation results, two major challenges still limit performance: (i) segmentation targets with irregular shapes and diverse sizes and (ii) low contrast or blurred boundaries between lesions and the background. To address these issues, this study proposes a Gated Fusion Attention Network (GFANet), which uses two progressive relation decoders to accurately segment skin lesion images. First, a Context Features Gated Fusion Decoder (CGFD) fuses multiple levels of contextual features and generates an initial prediction that serves as a guide map. This guide map is then refined by a prediction decoder consisting of a shape flow and a final Gated Convolution Fusion (GCF) module, in which a set of Channel Reverse Attention (CRA) modules and GCF modules is applied iteratively within the shape flow to combine the features of the current layer with the prediction results of the adjacent next layer and gradually extract boundary information. Finally, to speed up network convergence and improve segmentation accuracy, GCF is used to fuse low-level features from the encoder with the final output of the shape flow. To verify the effectiveness and advantages of the proposed GFANet, we conduct extensive experiments on four publicly available skin lesion datasets (International Skin Imaging Collaboration [ISIC] 2016, ISIC 2017, ISIC 2018, and PH2) and compare it with state-of-the-art methods. The experimental results show that GFANet achieves excellent segmentation performance on commonly used evaluation metrics, and its segmentation results are stable. The source code is available at https://github.com/ShiHanQ/GFANet.
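The abstract does not spell out the layer-level design of the GCF module, so the PyTorch sketch below only illustrates the general idea of gated fusion of two feature maps (a learned sigmoid gate weighting one branch against the other); it is an assumption for illustration, not GFANet's actual implementation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gated fusion of two same-sized feature maps (not the GCF from GFANet)."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, low_level, high_level):
        # The gate decides, per pixel and channel, how much of each branch to keep.
        g = self.gate(torch.cat([low_level, high_level], dim=1))
        return self.out(g * low_level + (1 - g) * high_level)

# Example: fuse 64-channel encoder features with decoder features of the same spatial size.
x_low, x_high = torch.randn(1, 64, 96, 96), torch.randn(1, 64, 96, 96)
print(GatedFusion(64)(x_low, x_high).shape)  # torch.Size([1, 64, 96, 96])
```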
Subjects
Skin Diseases , Humans , Skin , Benchmarking , Software , Image Processing, Computer-Assisted
ABSTRACT
Purpose: To develop a deep learning-based method to retrospectively quantify T2 from conventional T1- and T2-weighted images. Methods: Twenty-five subjects were imaged using a multi-echo spin-echo sequence to estimate reference prostate T2 maps. Conventional T1- and T2-weighted images were acquired as the input images. A U-Net-based neural network was developed to directly estimate T2 maps from the weighted images using a four-fold cross-validation training strategy. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean percentage error (MPE), and Pearson correlation coefficient were calculated to evaluate the quality of network-estimated T2 maps. To explore the potential of this approach in clinical practice, retrospective T2 quantification was performed on a high-risk prostate cancer cohort (Group 1) and a low-risk active surveillance cohort (Group 2). Tumor and non-tumor T2 values were evaluated by an experienced radiologist based on region of interest (ROI) analysis. Results: The T2 maps generated by the trained network were consistent with the corresponding references. Prostate tissue structures and contrast were well preserved, with a PSNR of 26.41 ± 1.17 dB, an SSIM of 0.85 ± 0.02, and a Pearson correlation coefficient of 0.86. Quantitative ROI analyses performed on 38 prostate cancer patients revealed estimated T2 values of 80.4 ± 14.4 ms and 106.8 ± 16.3 ms for tumor and non-tumor regions, respectively. ROI measurements showed a significant difference between tumor and non-tumor regions of the estimated T2 maps (P < 0.001). In the two-timepoint active surveillance cohort, patients defined as progressors exhibited lower estimated T2 values in the tumor ROIs at the second time point than at the first. Additionally, the T2 difference between the two time points for progressors was significantly greater than that for non-progressors (P = 0.010). Conclusion: A deep learning method was developed to estimate prostate T2 maps retrospectively from clinically acquired T1- and T2-weighted images, which has the potential to improve prostate cancer diagnosis and characterization without requiring extra scans.
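For reference, the multi-echo spin-echo data used to build the ground-truth prostate T2 maps are conventionally fit voxel-wise to a mono-exponential decay; the log-linear least-squares fit below is a standard sketch of that step under simple assumptions (no noise-floor correction), not the authors' exact fitting pipeline.

```python
import numpy as np

def fit_t2_monoexp(echo_images, echo_times_ms):
    # Voxel-wise fit of S(TE) = S0 * exp(-TE / T2) via log-linear least squares.
    # echo_images: array (n_echoes, H, W); echo_times_ms: length-n_echoes sequence.
    te = np.asarray(echo_times_ms, dtype=float)
    sig = np.clip(np.asarray(echo_images, dtype=float), 1e-6, None)
    log_sig = np.log(sig).reshape(len(te), -1)            # (n_echoes, n_voxels)
    design = np.stack([np.ones_like(te), -te], axis=1)    # columns fit ln(S0) and 1/T2
    coeffs, *_ = np.linalg.lstsq(design, log_sig, rcond=None)
    inv_t2 = np.clip(coeffs[1], 1e-6, None)
    return (1.0 / inv_t2).reshape(sig.shape[1:])          # T2 map in ms

# Hypothetical usage: 10 echoes with TE = 20, 40, ..., 200 ms.
t2_map = fit_t2_monoexp(np.random.rand(10, 128, 128) + 0.1, np.arange(20, 201, 20))
```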