ABSTRACT
PURPOSE: To develop a novel deep learning-based method that inherits the advantages of a data distribution prior and end-to-end training for accelerating MRI. METHODS: Langevin dynamics is used to formulate image reconstruction with a data distribution prior. The data distribution prior is learned implicitly through end-to-end adversarial training, which mitigates hyper-parameter selection and shortens the testing time compared to traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the Langevin dynamics iteration converges to a fixed point, ensuring the stability of the learned distribution. RESULTS: The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method outperforms the state-of-the-art both quantitatively and qualitatively. CONCLUSION: The proposed method, which incorporates Langevin dynamics with end-to-end adversarial training, enables efficient and robust MRI reconstruction. Empirical evaluations on brain and knee datasets demonstrate the superior performance of the proposed method in terms of artifact removal and detail preservation.
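As a rough illustration of the reconstruction loop described above, the sketch below runs plain Langevin dynamics on a toy 1D signal with randomly undersampled Fourier measurements; a hand-crafted smoothness score stands in for the adversarially learned data distribution prior, and all names and step sizes (mask, eta, sigma2, lam) are illustrative assumptions rather than the paper's settings. The actual method would replace prior_score with a trained network and wrap the iteration in a deep equilibrium model so it converges to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "image": piecewise-smooth signal, randomly undersampled in k-space.
n = 128
x_true = np.sin(2 * np.pi * np.arange(n) / n) + 0.5 * (np.arange(n) > n // 2)
mask = rng.random(n) < 0.4                          # random sampling mask
y = mask * np.fft.fft(x_true, norm="ortho")         # undersampled k-space data

def data_grad(x, sigma2=1e-2):
    """Gradient of the data-fidelity term ||M F x - y||^2 / (2 * sigma2)."""
    residual = mask * np.fft.fft(x, norm="ortho") - y
    return np.real(np.fft.ifft(mask * residual, norm="ortho")) / sigma2

def prior_score(x, lam=5.0):
    """Hand-crafted smoothness score standing in for the adversarially
    learned data distribution prior."""
    return lam * (np.roll(x, 1) - 2 * x + np.roll(x, -1))   # discrete Laplacian

# Langevin dynamics: x <- x - eta*grad_data + eta*score + sqrt(2*eta)*noise
x = np.zeros(n)
eta = 1e-3
for _ in range(2000):
    noise = np.sqrt(2 * eta) * rng.standard_normal(n)
    x = x - eta * data_grad(x) + eta * prior_score(x) + noise

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```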
Subject(s)
Algorithms; Brain; Image Processing, Computer-Assisted; Knee; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Knee/diagnostic imaging; Deep Learning; Retrospective Studies; Artifacts
ABSTRACT
BACKGROUND: Deep learning methods driven by low-rank regularization have achieved attractive performance in dynamic magnetic resonance (MR) imaging. The effectiveness of existing methods lies mainly in their ability to capture interframe relationships using network modules, which lack interpretability. PURPOSE: This study aims to design an interpretable methodology for modeling interframe relationships using convolutional networks, named Annihilation-Net, and to use it for accelerating dynamic MRI. METHODS: Based on the equivalence between Hankel matrix products and convolution, we utilize convolutional networks to learn the null-space transform that characterizes low-rankness. We employ this low-rankness to represent interframe correlations in dynamic MR imaging and combine it with sparsity constraints in the compressed sensing framework. The corresponding optimization problem is solved iteratively with the half-quadratic splitting (HQS) method. The iterative steps are unrolled into a network, dubbed Annihilation-Net. All regularization parameters and null-space transforms are learnable in the Annihilation-Net. RESULTS: The training and test sets contain 800 and 118 images, respectively. Experiments on the cardiac cine dataset show that the proposed model outperforms other competing methods both quantitatively and qualitatively. CONCLUSIONS: The proposed Annihilation-Net improves the reconstruction quality of accelerated dynamic MRI with better interpretability.
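To make the HQS unrolling concrete, here is a minimal sketch of half-quadratic splitting for a toy compressed-sensing problem; a soft-thresholding proximal step stands in for the learned null-space/annihilation regularizer, and the penalty weights (lam, mu) and iteration count are illustrative assumptions. In Annihilation-Net, each loop iteration would become one unrolled network stage with learnable transforms and parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: recover a sparse signal from undersampled Fourier data.
n = 256
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
mask = rng.random(n) < 0.3
y = mask * np.fft.fft(x_true, norm="ortho")

def soft_threshold(v, t):
    """Proximal step of t*||.||_1 -- the hand-crafted stand-in for the
    learned annihilation/null-space regularizer."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam, mu = 0.01, 1.0
z = np.zeros(n)
for it in range(50):                       # each iteration = one unrolled stage
    # x-update: argmin_x 0.5*||M F x - y||^2 + 0.5*mu*||x - z||^2
    # (closed form because the sampling mask is diagonal in k-space)
    x_k = (mask * y + mu * np.fft.fft(z, norm="ortho")) / (mask + mu)
    x = np.real(np.fft.ifft(x_k, norm="ortho"))
    # z-update: proximal step on the regularizer (a learned module in the paper)
    z = soft_threshold(x, lam / mu)

print("relative error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```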
Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Heart
ABSTRACT
Diffusion models with continuous stochastic differential equations (SDEs) have shown superior performance in image generation and can serve as deep generative priors for solving the inverse problem in magnetic resonance (MR) reconstruction. However, the low-frequency regions of k-space data are typically fully sampled in fast MR imaging, while existing diffusion models operate over the entire image or k-space, inevitably introducing uncertainty into the reconstruction of the low-frequency regions. Additionally, existing diffusion models often demand substantial iterations to converge, resulting in time-consuming reconstructions. To address these challenges, we propose a novel SDE tailored specifically for MR reconstruction, with the diffusion process confined to high-frequency space (referred to as HFS-SDE). This approach ensures determinism in the fully sampled low-frequency regions and accelerates the sampling procedure of reverse diffusion. Experiments conducted on the publicly available fastMRI dataset demonstrate that the proposed HFS-SDE method outperforms traditional parallel imaging methods, supervised deep learning, and existing diffusion models in terms of reconstruction accuracy and stability. The fast convergence properties are also confirmed through theoretical and experimental validation. Our code and weights are available at https://github.com/Aboriginer/HFS-SDE.
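The sketch below illustrates only the masking logic behind a high-frequency-space diffusion: the central, fully sampled low-frequency block of k-space is held fixed while forward-diffusion noise is injected into the high-frequency entries. The mask size, noise scale, and function names are assumptions for illustration; a real HFS-SDE implementation additionally trains a score network and runs the corresponding reverse SDE.

```python
import numpy as np

def centered_lowfreq_mask(shape, frac=0.125):
    """Boolean mask selecting the central (fully sampled) low-frequency block."""
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    ch, cw = int(h * frac), int(w * frac)
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = True
    return mask

def forward_perturb_hf(kspace, lf_mask, sigma_t, rng):
    """One forward-diffusion perturbation restricted to high-frequency k-space:
    low-frequency entries are left untouched (deterministic), high-frequency
    entries receive complex Gaussian noise of scale sigma_t."""
    noise = sigma_t * (rng.standard_normal(kspace.shape)
                       + 1j * rng.standard_normal(kspace.shape))
    return np.where(lf_mask, kspace, kspace + noise)

rng = np.random.default_rng(2)
image = rng.standard_normal((64, 64))           # stand-in for an MR image
kspace = np.fft.fftshift(np.fft.fft2(image, norm="ortho"))
lf_mask = centered_lowfreq_mask(kspace.shape)

noisy_k = forward_perturb_hf(kspace, lf_mask, sigma_t=0.5, rng=rng)
print("LF region unchanged:", np.allclose(noisy_k[lf_mask], kspace[lf_mask]))
print("HF region perturbed:", not np.allclose(noisy_k[~lf_mask], kspace[~lf_mask]))
```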
Subject(s)
Algorithms; Brain; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods
ABSTRACT
Accurate detection and segmentation of brain tumors are critical for medical diagnosis. However, current supervised learning methods require extensively annotated images, and the state-of-the-art generative models used in unsupervised methods often have limitations in covering the whole data distribution. In this paper, we propose a novel framework, the Two-Stage Generative Model (TSGM), which combines a Cycle Generative Adversarial Network (CycleGAN) with a Variance Exploding stochastic differential equation using joint probability (VE-JP) to improve brain tumor detection and segmentation. The CycleGAN is trained on unpaired data to generate abnormal images from healthy images as a data prior. VE-JP is then applied to reconstruct healthy images using the synthetic paired abnormal images as a guide, altering only the pathological regions while leaving healthy regions unchanged. Notably, our method directly learns the joint probability distribution for conditional generation. The residual between the input and reconstructed images indicates the abnormalities, and a thresholding method is subsequently applied to obtain the segmentation results. Furthermore, the results from different modalities are fused with modality-specific weights to further improve the segmentation accuracy. We validated our method on three datasets and compared it with other unsupervised methods for anomaly detection and segmentation. DSC scores of 0.8590 on the BraTS2020 dataset, 0.6226 on the ITCS dataset, and 0.7403 on the in-house dataset show that our method achieves better segmentation performance and generalizes better.
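The final segmentation step described above (residual map, modality weighting, thresholding) can be sketched as follows; the modality names, weights, and threshold are hypothetical placeholders, and the "reconstructions" here are simulated rather than produced by the CycleGAN/VE-JP stages.

```python
import numpy as np

def anomaly_segmentation(inputs, reconstructions, weights, threshold=0.5):
    """Fuse per-modality residuals into one anomaly map and threshold it.

    inputs, reconstructions: dicts modality -> 2D array (same shape)
    weights: dict modality -> float (hypothetical values, summing to 1)
    """
    fused = np.zeros_like(next(iter(inputs.values())), dtype=float)
    for mod, w in weights.items():
        residual = np.abs(inputs[mod] - reconstructions[mod])
        residual = residual / (residual.max() + 1e-8)   # normalize per modality
        fused += w * residual
    return fused, fused > threshold

rng = np.random.default_rng(3)
shape = (64, 64)
inputs = {m: rng.random(shape) for m in ("t1", "t2", "flair")}
recons = {m: img.copy() for m, img in inputs.items()}   # "healthy" reconstructions
for img in inputs.values():
    img[20:30, 20:30] += 1.0                            # simulated lesion in the inputs

weights = {"t1": 0.2, "t2": 0.3, "flair": 0.5}          # hypothetical modality weights
fused, seg = anomaly_segmentation(inputs, recons, weights, threshold=0.5)
print("segmented lesion pixels:", int(seg.sum()))
```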
Subject(s)
Algorithms; Brain Neoplasms; Image Interpretation, Computer-Assisted; Humans; Brain Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Brain/diagnostic imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods
ABSTRACT
Diffusion models have emerged as a leading methodology for image generation and have proven successful in the realm of magnetic resonance imaging (MRI) reconstruction. However, existing reconstruction methods based on diffusion models are primarily formulated in the image domain, making the reconstruction quality susceptible to inaccuracies in coil sensitivity maps (CSMs). k-space interpolation methods can effectively address this issue, but conventional diffusion models are not readily applicable to k-space interpolation. To overcome this challenge, we introduce a novel approach called SPIRiT-Diffusion, a diffusion model for k-space interpolation inspired by the iterative self-consistent SPIRiT method. Specifically, we utilize the iterative solver of the self-consistent term (i.e., the k-space physical prior) in SPIRiT to formulate a novel stochastic differential equation (SDE) governing the diffusion process. k-space data can then be interpolated by executing the diffusion process. This innovative approach highlights the role of the optimization model in designing the SDE, enabling the diffusion process to align closely with the physics inherent in the optimization model, a concept referred to as model-driven diffusion. We evaluated the proposed SPIRiT-Diffusion method on a 3D joint intracranial and carotid vessel wall imaging dataset. The results convincingly demonstrate its superiority over image-domain reconstruction methods, achieving high reconstruction quality even at a substantial acceleration rate of 10. Our code is available at https://github.com/zhyjSIAT/SPIRiT-Diffusion.
ABSTRACT
Objective. In Magnetic Resonance (MR) parallel imaging with virtual channel-expanded wave encoding, the ability to comprehensively and accurately characterize the background phase is limited, primarily because the calibration process relies solely on central low-frequency Auto-Calibration Signals (ACS) data. Approach. To tackle the challenge of accurately estimating the background phase in wave encoding, a novel deep neural network model guided by deep phase priors (DPP) is proposed with an integrated virtual conjugate coil (VCC) extension. Concretely, within the proposed framework, the background phase is characterized implicitly by a carefully designed decoder convolutional neural network, leveraging the inherent phase smoothness and the compact support in the transformed domain. Furthermore, the proposed model with wave encoding benefits from additional priors, which incorporate transform sparsity of the latent image and coil sensitivity smoothness. Main results. Ablation experiments were conducted to ascertain the proposed method's capability to implicitly represent the CSM and the background phase. The superiority of the proposed method is then demonstrated through comparisons with competing methods in 4-fold and 5-fold acceleration experiments, where the best quantitative metrics (PSNR/SSIM/NMSE) are 44.1359 dB/0.9863/0.0008 (4-fold) and 41.2074 dB/0.9846/0.0017 (5-fold), respectively. Furthermore, the generalizability of the proposed method is validated by acceleration experiments with T1, T2, T2*, and various undersampling patterns. In addition, DPP delivered much better performance than conventional methods in accelerated phase-sensitive susceptibility-weighted imaging (SWI), surpassing the best competing method by 0.096%/0.009%/0.0017% in PSNR/SSIM/NMSE. Significance. The proposed method enables precise characterization of the background phase in the integrated VCC and wave encoding framework, supported by theoretical analysis and empirical findings. Our code is available at: https://github.com/sober235/DPP.
Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Humans; Deep Learning
ABSTRACT
Objective. Positron Emission Tomography and Magnetic Resonance Imaging (PET-MRI) systems can obtain functional and anatomical scans, but PET suffers from a low signal-to-noise ratio while MRI is time-consuming. To shorten scan time, an effective strategy is to reduce the amount of k-space data collected, albeit at the cost of lower image quality. This study aims to leverage the inherent complementarity within PET-MRI data to enhance the image quality of PET-MRI. Approach. A novel PET-MRI joint reconstruction model, termed MC-Diffusion, is proposed in the Bayesian framework. The joint reconstruction problem is transformed into a joint regularization problem, in which the data fidelity terms of PET and MRI are expressed independently. The regularization term, the derivative of the logarithm of the joint probability distribution of PET and MRI, is learned with a joint score-based diffusion model. The diffusion model involves a forward diffusion process and a reverse diffusion process. The forward diffusion process adds noise to transform the complex joint data distribution of PET and MRI into a known joint prior distribution. The reverse diffusion process removes noise with a denoiser to revert the joint prior distribution to the original joint data distribution, effectively utilizing the joint probability distribution to describe the correlations between PET and MRI and improve the quality of the joint reconstruction. Main results. Qualitative and quantitative improvements are observed with the MC-Diffusion model. Comparative analysis against LPLS and Joint ISAT-net on the ADNI dataset demonstrates superior performance achieved by exploiting the complementary information between PET and MRI. The MC-Diffusion model effectively enhances the quality of PET and MRI images. Significance. This study employs the MC-Diffusion model to enhance the quality of PET-MRI images by integrating the fundamental principles of the PET and MRI modalities and leveraging their inherent complementarity. Furthermore, utilizing the diffusion model to learn the joint probability distribution of PET and MRI, and thereby elucidate their latent correlation, facilitates a more profound comprehension of the priors obtained through deep learning, in contrast to black-box priors or artificially constructed structural similarities.
Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Positron Emission Tomography; Image Processing, Computer-Assisted/methods; Humans; Diffusion; Multimodal Imaging; Signal-to-Noise Ratio; Bayes Theorem; Brain/diagnostic imaging
ABSTRACT
BACKGROUND: Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) stand as pivotal diagnostic tools for brain disorders, offering the potential for mutually enriching disease diagnostic perspectives. However, the costs associated with PET scans and the inherent radioactivity have limited the widespread application of PET. Furthermore, it is noteworthy to highlight the promising potential of high-field and ultra-high-field neuroimaging in cognitive neuroscience research and clinical practice. With the enhancement of MRI resolution, a related question arises: can high-resolution MRI improve the quality of PET images? PURPOSE: This study aims to enhance the quality of synthesized PET images by leveraging the superior resolution capabilities provided by high-field and ultra-high-field MRI. METHODS: From a statistical perspective, the joint probability distribution is the most direct and fundamental approach for representing the correlation between PET and MRI. In this study, we propose a novel model, the joint diffusion attention model (JDAM), which primarily focuses on learning the joint probability distribution. JDAM consists of two primary processes: the diffusion process and the sampling process. During the diffusion process, PET gradually transforms into a Gaussian noise distribution through the addition of Gaussian noise, while MRI remains fixed. The central objective of the diffusion process is to learn the gradient of the logarithm of the joint probability distribution between MRI and noisy PET. The sampling process operates as a predictor-corrector: the predictor initiates a reverse diffusion step, and the corrector applies Langevin dynamics. RESULTS: Experimental results from the publicly available Alzheimer's Disease Neuroimaging Initiative dataset highlight the effectiveness of the proposed model compared to state-of-the-art (SOTA) models such as Pix2pix and CycleGAN. Significantly, synthetic PET images guided by ultra-high-field MRI exhibit marked improvements in signal-to-noise characteristics compared with those generated from high-field MRI data. These results have been endorsed by medical experts, who consider the PET images synthesized by JDAM to possess scientific merit, based on their symmetrical features and precise representation of regions displaying hypometabolism, a hallmark of Alzheimer's disease. CONCLUSIONS: This study establishes the feasibility of generating PET images from MRI. Synthesis of PET by JDAM significantly enhances image quality compared to SOTA models.
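A minimal sketch of the predictor-corrector sampling loop is given below on a toy Gaussian stand-in, where the conditional score of "PET given MRI" is known in closed form instead of being estimated by the trained joint score network; the linear map A, noise schedule, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16
mri = rng.standard_normal(d)                  # conditioning image stays fixed
A = rng.standard_normal((d, d)) / np.sqrt(d)  # hypothetical MRI->PET linear map
tau = 0.1                                     # spread of the toy conditional p(PET|MRI)
pet_mean = A @ mri

def score(x, sigma):
    """Analytic score of the Gaussian toy p_sigma(PET | MRI); the paper learns
    this joint score with a network instead."""
    return -(x - pet_mean) / (tau ** 2 + sigma ** 2)

# Geometric variance-exploding noise schedule (hypothetical values).
sigmas = np.geomspace(10.0, 0.01, 200)

x = sigmas[0] * rng.standard_normal(d)        # start from (almost) pure noise
for i in range(len(sigmas) - 1):
    s_cur, s_next = sigmas[i], sigmas[i + 1]
    # Predictor: reverse-diffusion (Euler-Maruyama) step.
    dvar = s_cur ** 2 - s_next ** 2
    x = x + dvar * score(x, s_cur) + np.sqrt(dvar) * rng.standard_normal(d)
    # Corrector: one Langevin dynamics step at the new noise level.
    eps = 0.1 * s_next ** 2
    x = x + eps * score(x, s_next) + np.sqrt(2 * eps) * rng.standard_normal(d)

print("error of synthesized PET vs. toy ground truth:",
      np.linalg.norm(x - pet_mean) / np.linalg.norm(pet_mean))
```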
Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Positron Emission Tomography; Image Processing, Computer-Assisted/methods; Humans; Diffusion; Models, Theoretical; Brain/diagnostic imaging; Signal-to-Noise Ratio
ABSTRACT
Recently, diffusion models have shown considerable promise for MRI reconstruction. However, extensive experimentation has revealed that these models are prone to generating artifacts due to the inherent randomness involved in generating images from pure noise. To achieve more controlled image reconstruction, we reexamine the concept of interpolatable physical priors in k-space data, focusing specifically on the interpolation of high-frequency (HF) k-space data from low-frequency (LF) k-space data. Broadly, this insight drives a shift in the generation paradigm from random noise to a more deterministic approach grounded in the existing LF k-space data. Building on this, we first establish a relationship between the interpolation of HF k-space data from LF k-space data and the reverse heat diffusion process, providing a fundamental framework for designing diffusion models that generate the missing HF data. To further improve reconstruction accuracy, we integrate a traditional physics-informed k-space interpolation model into our diffusion framework as a data fidelity term. Experimental validation on publicly available datasets demonstrates that our approach significantly surpasses traditional k-space interpolation methods, deep learning-based k-space interpolation techniques, and conventional diffusion models, particularly in HF regions. Finally, we assess the generalization performance of our model across various out-of-distribution datasets. Our code is available at https://github.com/ZhuoxuCui/Heat-Diffusion.
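The sketch below illustrates the underlying intuition: forward heat diffusion acts in k-space as multiplication by exp(-t|k|^2), which damps high-frequency content, so recovering HF data amounts to reversing this process. The naive regularized inversion shown here recovers only part of the HF energy, which is exactly why a learned generative reverse process (plus the physics-informed interpolation term) is needed; the kernel, t, and eps values are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def heat_kernel(shape, t):
    """Fourier multiplier exp(-t*|k|^2) of the heat (Gaussian blur) semigroup."""
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-t * (2 * np.pi) ** 2 * (kx ** 2 + ky ** 2))

rng = np.random.default_rng(5)
img = rng.standard_normal((64, 64))                  # stand-in for an MR image
kspace = np.fft.fft2(img, norm="ortho")

t, eps = 0.5, 1e-4
H = heat_kernel(img.shape, t)
k_diffused = H * kspace        # forward heat diffusion: HF content is damped

# Naive "reverse heat diffusion": Wiener-style regularized inversion of the
# multiplier (a learned reverse process replaces this step in the paper).
k_recovered = k_diffused * H / (H ** 2 + eps)

freq2 = np.add.outer(np.fft.fftfreq(64) ** 2, np.fft.fftfreq(64) ** 2)
hf = freq2 > 0.1               # crude high-frequency region
print("HF energy, ground truth   :", np.linalg.norm(kspace[hf]))
print("HF energy after diffusion :", np.linalg.norm(k_diffused[hf]))
print("HF energy after inversion :", np.linalg.norm(k_recovered[hf]))
```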
Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Image Processing, Computer-Assisted/methods; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Algorithms; Heat
ABSTRACT
BACKGROUND: The cumulative burden of hypertrophic cardiomyopathy (HCM) is significant, with a noteworthy percentage (10%-15%) of patients with HCM per year experiencing major adverse cardiovascular events (MACEs). The current risk stratification scheme for HCM has only limited accuracy in predicting sudden cardiac death (SCD) and fails to account for a broader spectrum of adverse cardiovascular events and cardiac magnetic resonance (CMR) parameters. OBJECTIVES: This study sought to develop and evaluate a machine learning (ML) framework that integrates CMR imaging and clinical characteristics to predict MACEs in patients with HCM. METHODS: A total of 758 patients with HCM (67% male; age 49 ± 14 years) who were admitted between 2010 and 2017 from 4 medical centers were included. The ML model was built on the internal discovery cohort (533 patients with HCM, admitted to Fuwai Hospital, Beijing, China) using the light gradient-boosting machine and internally evaluated using cross-validation. The external test cohort consisted of 225 patients with HCM from 3 medical centers. A total of 14 CMR imaging features (strain and late gadolinium enhancement [LGE]) and 23 clinical variables were evaluated and used to inform the ML model. MACEs included a composite of arrhythmic events, SCD, heart failure, and atrial fibrillation-related stroke. RESULTS: MACEs occurred in 191 (25%) patients over a median follow-up period of 109.0 months (Q1-Q3: 73.0-118.8 months). Our ML model achieved areas under the curve (AUCs) of 0.830 and 0.812 internally and externally, respectively. The model outperformed the classic HCM Risk-SCD model, with a significant improvement (P < 0.001) of 22.7% in the AUC. Using cubic spline analysis, the study showed that the extent of LGE and the impairment of global radial strain (GRS) and global circumferential strain (GCS) were nonlinearly correlated with MACEs: an elevated risk of adverse cardiovascular events was observed once these parameters reached their second tertiles (11.6% for LGE, 25.8% for GRS, -17.3% for GCS). CONCLUSIONS: ML-empowered risk stratification using CMR and clinical features enabled accurate MACE prediction beyond the classic HCM Risk-SCD model. In addition, the nonlinear correlation between CMR features (LGE and left ventricular pressure gradient) and MACEs uncovered in this study provides valuable insights for the clinical assessment and management of HCM.
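For readers who want a concrete picture of the modeling step, the sketch below trains a light gradient-boosting machine on synthetic tabular data with the same feature count (14 CMR + 23 clinical) and evaluates a held-out AUC; the data, labels, and hyperparameters are placeholders and do not reflect the study's cohort or tuned configuration.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

# Synthetic stand-in for the 37 predictors (14 CMR features + 23 clinical variables).
n_patients, n_features = 758, 37
X = rng.standard_normal((n_patients, n_features))
logits = 1.5 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2]           # arbitrary toy signal
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logits))).astype(int)   # MACE labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Light gradient-boosting machine, as named in the abstract; these
# hyperparameters are illustrative, not the study's tuned values.
model = LGBMClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out data:", round(roc_auc_score(y_test, probs), 3))
```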
Subject(s)
Cardiomyopathy, Hypertrophic; Machine Learning; Magnetic Resonance Imaging, Cine; Predictive Value of Tests; Humans; Cardiomyopathy, Hypertrophic/diagnostic imaging; Cardiomyopathy, Hypertrophic/physiopathology; Cardiomyopathy, Hypertrophic/mortality; Cardiomyopathy, Hypertrophic/complications; Male; Middle Aged; Female; Adult; Risk Assessment; Prognosis; Risk Factors; Retrospective Studies; China/epidemiology; Nonlinear Dynamics; Reproducibility of Results; Death, Sudden, Cardiac/etiology; Time Factors; Decision Support Techniques; Aged
ABSTRACT
Magnetic resonance (MR) image reconstruction and super-resolution are two prominent techniques for restoring high-quality images from undersampled or low-resolution k-space data to accelerate MR imaging. Combining undersampled and low-resolution acquisition can further improve the acceleration factor. Existing methods often treat image reconstruction and super-resolution separately or combine them sequentially for image recovery, which can result in error propagation and suboptimal results. In this work, we propose a novel framework for joint image reconstruction and super-resolution, aiming at efficient image recovery and fast imaging. Specifically, we designed a framework with a reconstruction module and a super-resolution module to formulate multi-task learning. The reconstruction module utilizes a model-based optimization approach, ensuring data fidelity with the acquired k-space data. Moreover, a deep spatial feature transform is employed to enhance the information transition between the two modules, facilitating better integration of image reconstruction and super-resolution. Experimental evaluations on two datasets demonstrate that the proposed method provides superior performance both quantitatively and qualitatively.
ABSTRACT
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully sampled training data. However, existing UNN-based approaches do not model physical priors, resulting in poor performance in some common scenarios (e.g., partial Fourier (PF), regular sampling, etc.) and in a lack of theoretical guarantees for reconstruction accuracy. To bridge this gap, we propose a safeguarded k-space interpolation method for MRI using a specially designed UNN with a tripled architecture driven by three physical priors of MR images (or k-space data): transform sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees tight bounds on the accuracy of the interpolated k-space data. Ablation experiments show that the proposed method characterizes the physical priors of MR images well. Additionally, experiments show that the proposed method consistently outperforms traditional parallel imaging methods and existing UNNs, and is even competitive against supervised deep learning methods in PF and regular undersampling reconstruction.
Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Magnetic Resonance Imaging/methods
ABSTRACT
BACKGROUND: Magnetic resonance parameter mapping (MRPM) plays an important role in clinical applications and biomedical research. However, further accelerating MRPM remains a major challenge. PURPOSE: In this work, a new undersampled k-space-based joint multi-contrast image reconstruction approach named CC-IC-LMEN is proposed for accelerating MR T1rho mapping. METHODS: The reconstruction formulation of the proposed CC-IC-LMEN method imposes a blockwise low-rank assumption on the characteristic-image series (c-p space) and utilizes infimal convolution (IC) to exploit and balance the generalized low-rank properties in low- and high-order c-p spaces, thereby improving accuracy. In addition, matrix elastic-net (MEN) regularization based on the nuclear and Frobenius norms is incorporated to obtain stable and exact solutions in cases with large acceleration and noisy observations. This formulation results in a minimization problem that can be solved effectively using a numerical algorithm based on the alternating direction method of multipliers (ADMM). Finally, T1rho maps are generated from the reconstructed images using nonlinear least-squares (NLSQ) curve fitting with an established relaxometry model. RESULTS: The relative l2-norm error (RLNE) and structural similarity (SSIM) in the regions of interest (ROIs) show that the CC-IC-LMEN approach is more accurate than other competing methods, even with heavy undersampling or noisy observations. CONCLUSIONS: Our proposed CC-IC-LMEN method provides accurate and robust solutions for accelerated MR T1rho mapping.
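The final relaxometry step can be sketched directly: fit the mono-exponential model S(TSL) = S0 * exp(-TSL/T1rho) to the reconstructed signal at each voxel with nonlinear least squares. The spin-lock times, noise level, and initial guesses below are hypothetical, and only a single voxel is fitted for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def t1rho_model(tsl, s0, t1rho):
    """Mono-exponential T1rho relaxometry model S(TSL) = S0 * exp(-TSL / T1rho)."""
    return s0 * np.exp(-tsl / t1rho)

rng = np.random.default_rng(7)
tsl = np.array([1.0, 10.0, 20.0, 40.0, 60.0])   # spin-lock times in ms (hypothetical)
t1rho_true, s0_true = 45.0, 100.0

# Simulated reconstructed signal for one voxel, with mild noise.
signal = t1rho_model(tsl, s0_true, t1rho_true) + rng.normal(0, 0.5, tsl.size)

# Nonlinear least-squares (NLSQ) curve fitting, as in the final mapping step.
popt, _ = curve_fit(t1rho_model, tsl, signal, p0=(signal[0], 30.0))
print(f"fitted S0 = {popt[0]:.1f}, fitted T1rho = {popt[1]:.1f} ms")
```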
Subject(s)
Algorithms; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Brain
ABSTRACT
BACKGROUND: Wave gradient encoding can adequately utilize coil sensitivity profiles to facilitate higher accelerations in parallel magnetic resonance imaging (pMRI). However, mainstream pMRI and the few deep learning (DL) methods for recovering missing data under the wave encoding framework have limitations: the former is prone to errors from auto-calibration signals (ACS) acquisition and is time-consuming, while the latter requires a large amount of training data. PURPOSE: To tackle these issues, an untrained neural network (UNN) model incorporating wave-encoded physical properties and a deep generative model, named WDGM, is proposed; it requires neither ACS acquisition nor training data. METHODS: In general, the proposed method provides powerful missing-data interpolation by combining the wave physical encoding framework with a UNN designed to characterize MR image (k-space data) priors. Specifically, MRI reconstruction combining physical wave encoding and the elaborate UNN is modeled as a generalized minimization problem. The design of the UNN is driven by coil sensitivity map (CSM) smoothness and k-space linear predictability. The iterative paradigm for recovering the full k-space signal is determined by projected gradient descent, and the computation is unrolled into a network whose parameters are tuned by the optimizer. Simulated wave encoding and in vivo experiments are used to demonstrate the feasibility of the proposed method. The best quantitative metrics (RMSE/SSIM/PSNR of 0.0413/0.9514/37.4862) are competitive in all experiments with at least six-fold acceleration. RESULTS: In vivo experiments on human brains and knees showed that the proposed method achieves reconstruction quality comparable, and in some cases superior, to the compared methods, especially at a high resolution of 0.67 mm and with fewer ACS data. In addition, the proposed method has high computational efficiency, achieving a computation time of 9.6 s per slice. CONCLUSIONS: The model proposed in this work addresses two limitations of MRI reconstruction in the wave encoding framework. First, it eliminates the need for ACS acquisition, avoiding the time-consuming calibration process and errors such as motion during the acquisition procedure. Furthermore, the proposed method is friendly for clinical application because it does not require large training datasets, which are difficult to prepare in the clinic. All results favor the proposed method in both quantitative and qualitative metrics. In addition, the proposed method achieves higher computational efficiency.
Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Motion (Physics); Algorithms
ABSTRACT
Quantitative magnetic resonance (MR) [Formula: see text] mapping is a promising approach for characterizing intrinsic tissue-dependent information. However, long scan time significantly hinders its widespread applications. Recently, low-rank tensor models have been employed and demonstrated exemplary performance in accelerating MR [Formula: see text] mapping. This study proposes a novel method that uses spatial patch-based and parametric group-based low-rank tensors simultaneously (SMART) to reconstruct images from highly undersampled k-space data. The spatial patch-based low-rank tensor exploits the high local and nonlocal redundancies and similarities between the contrast images in [Formula: see text] mapping. The parametric group-based low-rank tensor, which integrates similar exponential behavior of the image signals, is jointly used to enforce multidimensional low-rankness in the reconstruction process. In vivo brain datasets were used to demonstrate the validity of the proposed method. Experimental results demonstrated that the proposed method achieves 11.7-fold and 13.21-fold accelerations in two-dimensional and three-dimensional acquisitions, respectively, with more accurate reconstructed images and maps than several state-of-the-art methods. Prospective reconstruction results further demonstrate the capability of the SMART method in accelerating MR [Formula: see text] imaging.
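As a simplified, matrix-level illustration of the spatial patch-based low-rank idea, the sketch below stacks one spatial patch across the contrast dimension into a Casorati-style matrix and truncates its singular values; the patch location, size, rank, and the generic exponential-decay series used as input are all assumptions, and the actual SMART method enforces low-rankness on higher-order tensors jointly with the parametric group-based term.

```python
import numpy as np

def lowrank_patch_shrinkage(stack, top, left, size, rank):
    """Enforce low-rankness on one spatial patch across the contrast dimension.

    stack: (n_contrasts, H, W) image series; the patch is reshaped into a
    (n_contrasts, size*size) Casorati-style matrix, truncated to `rank`
    singular values, and written back.
    """
    patch = stack[:, top:top + size, left:left + size].reshape(stack.shape[0], -1)
    u, s, vt = np.linalg.svd(patch, full_matrices=False)
    s[rank:] = 0.0                                  # hard singular-value truncation
    out = stack.copy()
    out[:, top:top + size, left:left + size] = ((u * s) @ vt).reshape(-1, size, size)
    return out

rng = np.random.default_rng(8)
n_contrasts, H, W = 8, 32, 32
times = np.linspace(1, 60, n_contrasts)[:, None, None]   # hypothetical contrast axis
decay_map = 30.0 + 20.0 * rng.random((H, W))              # generic relaxation-time map
series = np.exp(-times / decay_map) + 0.05 * rng.standard_normal((n_contrasts, H, W))

denoised = lowrank_patch_shrinkage(series, top=8, left=8, size=8, rank=2)
patch = denoised[:, 8:16, 8:16].reshape(n_contrasts, -1)
print("patch rank after shrinkage:", np.linalg.matrix_rank(patch))
```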
Subject(s)
Algorithms; Magnetic Resonance Imaging; Prospective Studies; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Magnetic Resonance Spectroscopy; Image Processing, Computer-Assisted/methods
ABSTRACT
In recent times, model-driven deep learning has evolved iterative algorithms into cascade networks by replacing the regularizer's first-order information, such as the (sub)gradient or proximal operator, with a network module. This approach offers greater explainability and predictability than typical data-driven networks. However, in theory, there is no assurance that a functional regularizer exists whose first-order information matches the substituted network module. This implies that the unrolled network output may not align with the regularization models. Furthermore, there are few established theories that guarantee global convergence and robustness (regularity) of unrolled networks under practical assumptions. To address this gap, we propose a safeguarded methodology for network unrolling. Specifically, for parallel MR imaging, we unroll a zeroth-order algorithm, in which the network module serves as the regularizer itself, so that the network output is covered by a regularization model. Additionally, inspired by deep equilibrium models, we run the unrolled network to a fixed point before backpropagation and then demonstrate that it can tightly approximate the actual MR image. We also prove that the proposed network is robust against noisy interference if the measurement data contain noise. Finally, numerical experiments indicate that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization and unrolled deep learning techniques.
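The deep-equilibrium idea of running the unrolled, weight-tied update to a fixed point before backpropagation can be sketched as below; the "network module" is replaced by a hand-crafted soft-shrinkage operator and the measurement model is a toy random matrix, so this only shows the fixed-point mechanics, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy inverse problem y = A x + noise with a sparse ground truth.
m, n = 40, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

step = 0.5 / np.linalg.norm(A, 2) ** 2      # keeps the update non-expansive

def network_module(v, t=0.02):
    """Stand-in for the learned regularizer module: soft shrinkage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def iteration_map(x):
    """One weight-tied unrolled step: gradient step on the data fidelity,
    followed by the regularizer module (a network in the paper)."""
    return network_module(x - step * A.T @ (A @ x - y))

# Run the iteration to (numerical) convergence, i.e., to a fixed point.
x = np.zeros(n)
for k in range(1000):
    x_new = iteration_map(x)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

print(f"iterations: {k + 1}, "
      f"fixed-point residual: {np.linalg.norm(iteration_map(x) - x):.2e}, "
      f"relative error: {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```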
Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
ABSTRACT
Supervised deep learning (SDL) methodology holds promise for accelerated magnetic resonance imaging (AMRI) but is hampered by the reliance on extensive training data. Some self-supervised frameworks, such as deep image prior (DIP), have emerged, eliminating the explicit training procedure but often struggling to remove noise and artifacts under significant degradation. This work introduces a novel self-supervised accelerated parallel MRI approach called PEARL, leveraging a multiple-stream joint deep decoder with two cross-fusion schemes to accurately reconstruct one or more target images from compressively sampled k-space. Each stream comprises cascaded cross-fusion sub-block networks (SBNs) that sequentially perform combined upsampling, 2D convolution, joint attention, ReLU activation and batch normalization (BN). Among them, combined upsampling and joint attention facilitate mutual learning between multiple-stream networks by integrating multi-parameter priors in both additive and multiplicative manners. Long-range unified skip connections within SBNs ensure effective information propagation between distant cross-fusion layers. Additionally, incorporating dual-normalized edge-orientation similarity regularization into the training loss enhances detail reconstruction and prevents overfitting. Experimental results consistently demonstrate that PEARL outperforms the existing state-of-the-art (SOTA) self-supervised AMRI technologies in various MRI cases. Notably, 5-fold to 6-fold accelerated acquisition yields a 1%-2% improvement in ROI SSIM and a 3%-6% improvement in ROI PSNR, along with a significant 15%-20% reduction in ROI RLNE.
ABSTRACT
Objective. The plug-and-play prior (P3) can be flexibly coupled with multiple iterative optimization schemes and has been successfully applied to inverse problems in medical imaging. In this work, for accelerated cardiac cine magnetic resonance imaging (CC-MRI), the Spatiotemporal corrElAtion-based hyBrid plUg-and-play priorS (SEABUS), integrating a local P3 and a nonlocal P3, are introduced. Approach. Specifically, the local P3 enforces pixelwise edge-orientation consistency by conducting reference-frame-guided multiscale orientation projection on a subset containing a few adjacent frames; the nonlocal P3 constrains cubewise anatomic-structure similarity by performing cube matching and 4D filtering (CM4D) on all frames. By effectively using a composite splitting algorithm (CSA), SEABUS is incorporated into a fast iterative shrinkage-thresholding algorithm, and a new accelerated CC-MRI approach named SEABUS-FCSA is proposed. Main results. The experiments and algorithm analysis demonstrate the efficiency and potential of the proposed SEABUS-FCSA approach, which has the best performance in terms of reducing aliasing artifacts and capturing dynamic features in comparison with several state-of-the-art accelerated CC-MRI technologies. Significance. Our approach proposes a new hybrid P3-based iterative algorithm, which not only improves the quality of accelerated cardiac cine imaging but also extends the FCSA methodology.
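A compact sketch of an FCSA-style composite splitting step is given below: a FISTA loop in which each of two hand-crafted stand-in priors (a shrinkage "local" prior and an averaging "nonlocal" prior) is applied to the same gradient-updated point and the results are averaged. The toy measurement model, priors, and step sizes are assumptions; SEABUS-FCSA plugs in the multiscale orientation projection and CM4D filtering instead.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy stand-in: recover a piecewise-constant signal from random linear measurements.
m, n = 80, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.repeat(rng.standard_normal(6), n // 6)
y = A @ x_true + 0.01 * rng.standard_normal(m)

step = 1.0 / np.linalg.norm(A, 2) ** 2

def prior_local(v, t=0.01):
    """Stand-in for the local prior: soft shrinkage (edge/sparsity-type)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prior_nonlocal(v):
    """Stand-in for the nonlocal prior: simple neighborhood averaging filter."""
    return (np.roll(v, 1) + v + np.roll(v, -1)) / 3.0

# FISTA with a composite splitting step: apply each plug-and-play prior to the
# same gradient-updated point and average the results.
x = np.zeros(n)
z = x.copy()
t_prev = 1.0
for _ in range(300):
    grad_point = z - step * A.T @ (A @ z - y)
    x_new = 0.5 * (prior_local(grad_point) + prior_nonlocal(grad_point))
    t_new = (1 + np.sqrt(1 + 4 * t_prev ** 2)) / 2
    z = x_new + ((t_prev - 1) / t_new) * (x_new - x)
    x, t_prev = x_new, t_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```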
Subject(s)
Magnetic Resonance Imaging, Cine; Magnetic Resonance Imaging; Magnetic Resonance Imaging, Cine/methods; Magnetic Resonance Imaging/methods; Artifacts; Heart/diagnostic imaging; Algorithms; Image Processing, Computer-Assisted/methods
ABSTRACT
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate the prior knowledge into the reconstruction under the explicit constraint of data consistency, without considering the real distribution of the noise. In this work, we propose a new DL-based approach, termed Learned DC, that implicitly learns the data consistency with deep networks, corresponding to the actual probability distribution of the system noise. The data consistency term and the prior knowledge are both embedded in the weights of the networks, which provides an entirely implicit way of learning the reconstruction model. We evaluated the proposed approach with highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate the superior performance of Learned DC, both quantitatively and qualitatively, over the state-of-the-art.
Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Heart/diagnostic imaging; Probability
ABSTRACT
BACKGROUND: Magnetic resonance (MR) quantitative T1ρ imaging has been increasingly used to detect the early stages of osteoarthritis. The small volume and curved surface of articular cartilage necessitate imaging with high in-plane resolution and thin slices for accurate T1ρ measurement. Compared with 2D T1ρ mapping, 3D T1ρ mapping is free from artifacts caused by slice cross-talk and has a thinner slice thickness and full volume coverage. However, this technique needs to acquire multiple T1ρ-weighted images with different spin-lock times, which results in a very long scan duration. It is highly expected that the scan time can be reduced in 3D T1ρ mapping without compromising the T1ρ quantification accuracy and precision. METHODS: To accelerate the acquisition of 3D T1ρ mapping without compromising the T1ρ quantification accuracy and precision, a signal-compensated robust tensor principal component analysis method was proposed in this paper. The 3D T1ρ-weighted images compensated at different spin-lock times were decomposed as a low-rank high-order tensor plus a sparse component. Poisson-disk random undersampling patterns were applied to k-space data in the phase- and partition-encoding directions in both retrospective and prospective experiments. Five volunteers were involved in this study. The fully sampled k-space data acquired from 3 volunteers were retrospectively undersampled at R=5.2, 7.7, and 9.7, respectively. Reference values were obtained from the fully sampled data. Prospectively undersampled data for R=5 and R=7 were acquired from 2 volunteers. Bland-Altman analyses were used to assess the agreement between the accelerated and reference T1ρ measurements. The reconstruction performance was evaluated using the normalized root mean square error and the median of the normalized absolute deviation (MNAD) of the reconstructed T1ρ-weighted images and the corresponding T1ρ maps. RESULTS: T1ρ parameter maps were successfully estimated from T1ρ-weighted images reconstructed using the proposed method for all accelerations. The accelerated T1ρ measurements and reference values were in good agreement for R=5.2 (T1ρ: 40.4±1.4 ms), R=7.7 (T1ρ: 40.4±2.1 ms), and R=9.7 (T1ρ: 40.9±2.2 ms) in the Bland-Altman analyses. The T1ρ parameter maps reconstructed from the prospectively undersampled data also showed promising image quality using the proposed method. CONCLUSIONS: The proposed method achieves the 3D T1ρ mapping of in vivo knee cartilage in eight minutes using a signal-compensated robust tensor principal component analysis method in image reconstruction.
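The core low-rank-plus-sparse decomposition can be sketched at the matrix level as below: each column is one vectorized (signal-compensated) image at one spin-lock time, and the series is split into a low-rank background and a sparse component by alternating a truncated SVD with entrywise soft-thresholding. Sizes, rank, and thresholds are illustrative, and the actual method operates on a high-order tensor with Poisson-disk undersampled k-space data fidelity.

```python
import numpy as np

def rank_truncate(M, r):
    """Best rank-r approximation via truncated SVD."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]

def soft(M, tau):
    """Entrywise soft-thresholding, the proximal operator of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

rng = np.random.default_rng(11)

# Toy stand-in for the compensated T1rho-weighted series: each column is one
# vectorized image at one spin-lock time = low-rank background + sparse outliers.
n_pixels, n_tsl, rank = 400, 8, 2
L_true = rng.standard_normal((n_pixels, rank)) @ rng.standard_normal((rank, n_tsl))
S_true = np.zeros((n_pixels, n_tsl))
idx = rng.choice(n_pixels * n_tsl, 80, replace=False)
S_true.flat[idx] = 5.0 * rng.standard_normal(80)
X = L_true + S_true

# Alternate a rank-truncation step for L with a soft-thresholding step for S.
lam_S = 0.5
L = np.zeros_like(X)
S = np.zeros_like(X)
for _ in range(50):
    L = rank_truncate(X - S, rank)
    S = soft(X - L, lam_S)

print("rank of recovered L:", np.linalg.matrix_rank(L))
print("low-rank recovery error:",
      np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```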