ABSTRACT
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks, and this domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches to fast MRI, together with several notable solutions spanning network learning strategies and different imaging application scenarios. The characteristics and trends of these techniques are also summarized, showing a shift from supervised to semi-supervised and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are reported, along with a discussion of open questions and future directions that are critical for reliable imaging systems.
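As a rough illustration of the knowledge-driven designs surveyed here (not code from the review itself), the sketch below shows a toy unrolled reconstruction that alternates a learned CNN prior with a physics-based data-consistency step in k-space; all module names, layer sizes, and the single-coil setup are illustrative assumptions.

```python
# Minimal sketch: one way to combine MRI physics with a learned prior is an
# unrolled network that alternates a small CNN denoiser with a data-consistency
# step enforcing the acquired k-space samples. All names below are illustrative.
import torch
import torch.nn as nn


class DataConsistency(nn.Module):
    """Replace reconstructed k-space values with the acquired ones at sampled locations."""
    def forward(self, image, k_acquired, mask):
        k_est = torch.fft.fft2(image)                        # image -> k-space
        k_dc = torch.where(mask.bool(), k_acquired, k_est)   # enforce measured data
        return torch.fft.ifft2(k_dc)                         # back to image space


class UnrolledRecon(nn.Module):
    """A few iterations of (CNN denoiser -> data consistency), a common knowledge-driven design."""
    def __init__(self, iterations: int = 5):
        super().__init__()
        self.dc = DataConsistency()
        self.denoisers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))
            for _ in range(iterations)
        ])

    def forward(self, k_acquired, mask):
        image = torch.fft.ifft2(k_acquired * mask)            # zero-filled initialization
        for denoiser in self.denoisers:
            x = torch.stack([image.real, image.imag], dim=1)  # complex -> 2 channels
            x = x + denoiser(x)                               # residual learned prior
            image = torch.complex(x[:, 0], x[:, 1])
            image = self.dc(image, k_acquired, mask)          # physics-based correction
        return image.abs()


if __name__ == "__main__":
    k = torch.randn(1, 128, 128, dtype=torch.complex64)       # toy single-coil k-space
    mask = (torch.rand(1, 128, 128) < 0.3).float()            # ~30% random sampling
    recon = UnrolledRecon()(k, mask)
    print(recon.shape)  # torch.Size([1, 128, 128])
```

The data-consistency step is where the MRI forward model enters the network, which is the main point of departure from generic natural-image restoration models.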
Subjects
Algorithms; Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Brain/diagnostic imaging
ABSTRACT
Heterogeneous data captured by different scanning devices and imaging protocols can degrade the generalization performance of deep learning magnetic resonance (MR) reconstruction models. While centralized training is effective in mitigating this problem, it raises concerns about privacy protection. Federated learning is a distributed training paradigm that can utilize multi-institutional data for collaborative training without sharing data. However, existing federated learning MR image reconstruction methods rely on models designed manually by experts, which are complex and computationally expensive and suffer from performance degradation when facing heterogeneous data distributions. In addition, these methods give inadequate consideration to fairness, namely ensuring that training does not introduce bias towards any specific dataset's distribution. To this end, this paper proposes a generalizable federated neural architecture search framework for accelerating MR imaging (GAutoMRI). Specifically, automatic neural architecture search is investigated for effective and efficient neural network representation learning of MR images from different centers. Furthermore, we design a fairness adjustment approach that enables the model to learn features fairly from the inconsistent distributions of different devices and centers, and thus helps the model generalize well to unseen centers. Extensive experiments show that the proposed GAutoMRI achieves better performance and generalization ability than seven state-of-the-art federated learning methods. Moreover, the GAutoMRI model is significantly more lightweight, making it an efficient choice for MR image reconstruction tasks. The code will be made available at https://github.com/ternencewu123/GAutoMRI.
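The abstract does not spell out the fairness adjustment, so the following is only a minimal sketch of one plausible fairness-aware aggregation step: clients whose local validation loss is higher receive a larger weight when the server averages model parameters, so no single center's distribution dominates. The function names, the softmax weighting, and the temperature parameter are assumptions, not GAutoMRI's actual procedure.

```python
# Illustrative sketch of fairness-adjusted federated aggregation (not GAutoMRI's code):
# up-weight clients with higher validation loss when averaging model parameters.
from typing import Dict, List
import torch


def fairness_weights(client_losses: List[float], temperature: float = 1.0) -> torch.Tensor:
    """Softmax over client losses: poorly-served clients get larger aggregation weight."""
    losses = torch.tensor(client_losses)
    return torch.softmax(losses / temperature, dim=0)


def aggregate(client_states: List[Dict[str, torch.Tensor]],
              client_losses: List[float]) -> Dict[str, torch.Tensor]:
    """Weighted average of client model parameters using fairness-adjusted weights."""
    weights = fairness_weights(client_losses)
    global_state = {}
    for name in client_states[0]:
        stacked = torch.stack([state[name].float() for state in client_states])
        shape = (-1,) + (1,) * (stacked.dim() - 1)             # broadcast weights over params
        global_state[name] = (weights.view(shape) * stacked).sum(0)
    return global_state


if __name__ == "__main__":
    # Three toy "centers" sharing a single linear layer.
    states = [{"weight": torch.randn(4, 4), "bias": torch.randn(4)} for _ in range(3)]
    losses = [0.12, 0.35, 0.20]    # the second center is under-served, so it gets more weight
    new_state = aggregate(states, losses)
    print({k: v.shape for k, v in new_state.items()})
```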
ABSTRACT
Deep learning-based methods have achieved encouraging performance in the field of Magnetic Resonance (MR) image reconstruction. Nevertheless, building powerful and robust deep learning models requires collecting large and diverse datasets from multiple centers, which raises concerns about ethics and data privacy. Recently, federated learning has emerged as a promising solution, enabling the utilization of multi-center data without the need for data transfer between institutions. Despite its potential, existing federated learning methods face challenges due to the high heterogeneity of data from different centers. Aggregation methods based on simple averaging, which are commonly used to combine clients' information, have shown limited reconstruction and generalization capabilities. In this paper, we propose a Model-based Federated learning framework (ModFed) to address these challenges. ModFed has three major contributions: (1) different from existing data-driven federated learning methods, ModFed designs attention-assisted model-based neural networks that alleviate the need for large amounts of data on each client; (2) to address the data heterogeneity issue, ModFed proposes an adaptive dynamic aggregation scheme, which improves the generalization capability and robustness of the trained models; (3) ModFed incorporates a spatial Laplacian attention mechanism and a personalized client-side loss regularization to capture the detailed information needed for accurate image reconstruction. The effectiveness of the proposed ModFed is evaluated on three in vivo datasets. Experimental results show that, compared to six state-of-the-art federated learning approaches, ModFed achieves better MR image reconstruction performance with increased generalization capability. Code will be made available at https://github.com/ternencewu123/ModFed.
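As a hedged sketch of the personalization idea in contribution (3) (not ModFed's exact regularizer), the client-side objective below adds a proximal penalty that keeps each local model close to the shared global model while it fits local reconstructions; the L1 reconstruction loss and the coefficient mu are assumptions.

```python
# Minimal sketch of a personalized client-side loss with a proximal term toward the
# global model, a common way to regularize heterogeneous federated clients.
import torch
import torch.nn as nn


def client_loss(local_model: nn.Module,
                global_model: nn.Module,
                prediction: torch.Tensor,
                target: torch.Tensor,
                mu: float = 0.01) -> torch.Tensor:
    """Reconstruction loss plus a proximal penalty toward the shared global parameters."""
    recon = nn.functional.l1_loss(prediction, target)
    prox = sum(((p_l - p_g.detach()) ** 2).sum()
               for p_l, p_g in zip(local_model.parameters(), global_model.parameters()))
    return recon + 0.5 * mu * prox


if __name__ == "__main__":
    local = nn.Conv2d(1, 1, 3, padding=1)      # stand-in for a client's reconstruction network
    shared = nn.Conv2d(1, 1, 3, padding=1)     # stand-in for the aggregated global network
    x = torch.randn(2, 1, 64, 64)
    loss = client_loss(local, shared, local(x), torch.randn(2, 1, 64, 64))
    loss.backward()
    print(float(loss))
```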
Subjects
Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Neural Networks, Computer; Algorithms
ABSTRACT
With the successful application of deep learning to magnetic resonance (MR) imaging, parallel imaging techniques based on neural networks have attracted wide attention. However, in the absence of high-quality, fully sampled datasets for training, the performance of these methods is limited, and their interpretability is not strong enough. To tackle this issue, this paper proposes a Physics-bAsed unsupeRvised Contrastive rEpresentation Learning (PARCEL) method to speed up parallel MR imaging. Specifically, PARCEL uses a parallel framework to contrastively learn two branches of model-based unrolling networks from augmented undersampled multi-coil k-space data. A sophisticated co-training loss with three essential components is designed to guide the two networks in capturing the inherent features and representations of MR images, and the final MR image is reconstructed with the trained contrastive networks. PARCEL was evaluated on two in vivo datasets and compared to five state-of-the-art methods. The results show that PARCEL is able to learn essential representations for accurate MR reconstruction without relying on fully sampled datasets. The code will be made available at https://github.com/ternencewu123/PARCEL.
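A minimal sketch of the co-training idea (assumptions, not PARCEL's published loss): two branches reconstruct the same scan from differently re-undersampled views of the acquired k-space, and the loss combines fidelity to the measured samples with agreement between the branches. The weights, shapes, and single-coil toy setup are illustrative only.

```python
# Illustrative co-training loss for two reconstruction branches trained without
# fully sampled references: data fidelity on acquired k-space samples plus a
# cross-branch consistency term. Names and weights are assumptions.
import torch


def co_training_loss(recon_a: torch.Tensor, recon_b: torch.Tensor,
                     k_acquired: torch.Tensor, mask: torch.Tensor,
                     consistency_weight: float = 1.0) -> torch.Tensor:
    """Undersampled-data fidelity for each branch plus a cross-branch consistency term."""
    k_a = torch.fft.fft2(recon_a) * mask
    k_b = torch.fft.fft2(recon_b) * mask
    measured = k_acquired * mask
    fidelity = (k_a - measured).abs().mean() + (k_b - measured).abs().mean()
    consistency = (recon_a - recon_b).abs().mean()
    return fidelity + consistency_weight * consistency


if __name__ == "__main__":
    k = torch.randn(1, 128, 128, dtype=torch.complex64)        # toy single-coil k-space
    mask = (torch.rand(1, 128, 128) < 0.3).float()             # acquired sampling pattern
    mask_a = mask * (torch.rand(1, 128, 128) < 0.8).float()    # two re-undersampled views
    mask_b = mask * (torch.rand(1, 128, 128) < 0.8).float()
    x_a = torch.fft.ifft2(k * mask_a)    # stand-ins for the two branch reconstructions
    x_b = torch.fft.ifft2(k * mask_b)    # (a real run would use two unrolled networks)
    print(float(co_training_loss(x_a, x_b, k, mask)))
```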
ABSTRACT
Lately, deep learning technology has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. The proposed SelfCoLearn is equipped with three important components: dual-network collaborative learning, reundersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing essential and inherent representations for direct reconstruction from undersampled k-space data, and thus enables high-quality and fast dynamic MR imaging.
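To make the reundersampling data augmentation concrete, here is a minimal sketch under assumed shapes (frames x height x width, single coil): the acquired k-t samples are randomly subsampled twice to form the two collaborative network inputs, while the full set of acquired samples remains available to the co-training loss. The helper name and the keep ratio are hypothetical, not the authors' implementation.

```python
# Illustrative "reundersampling" augmentation for self-supervised dynamic MR training:
# draw two further-undersampled views of the acquired k-t data as the dual-network inputs.
import torch


def reundersample(k_acquired: torch.Tensor, mask: torch.Tensor, keep: float = 0.8):
    """Draw two random subsets of the acquired k-t samples as augmented network inputs."""
    sub_a = (torch.rand_like(mask) < keep).float() * mask
    sub_b = (torch.rand_like(mask) < keep).float() * mask
    return k_acquired * sub_a, sub_a, k_acquired * sub_b, sub_b


if __name__ == "__main__":
    # Toy dynamic data: (frames, height, width) single-coil k-t space.
    k = torch.randn(8, 128, 128, dtype=torch.complex64)
    mask = (torch.rand(8, 128, 128) < 0.25).float()           # acquired k-t sampling pattern
    k_a, m_a, k_b, m_b = reundersample(k * mask, mask)
    # Each network reconstructs from its own input; the co-training loss then compares the
    # two outputs with each other and with the originally acquired samples (not shown here).
    print(m_a.sum().item(), m_b.sum().item(), mask.sum().item())
```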