1 - 20 of 34
1.
Article En | MEDLINE | ID: mdl-38536690

Image fusion plays a key role in a variety of multi-sensor-based vision systems, especially for enhancing visual quality and/or extracting aggregated features for perception. However, most existing methods consider image fusion as an individual task, thus ignoring its underlying relationship with downstream vision problems. Furthermore, designing proper fusion architectures often requires substantial engineering labor, and current fusion approaches lack mechanisms to improve their flexibility and generalization ability. To mitigate these issues, we establish a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in challenging real-world scenarios. Specifically, we first propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion. Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency. In addition, a pretext meta-initialization technique is introduced to leverage diverse fusion data to support fast adaptation to different kinds of image fusion tasks. Qualitative and quantitative experimental results on different categories of image fusion problems and related downstream tasks (e.g., visual enhancement and semantic understanding) substantiate the flexibility and effectiveness of TIM.
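
As a schematic reading of the task-guided strategy described above (notation ours, not necessarily the authors' exact formulation), the downstream task can be expressed as a term that constrains the otherwise unsupervised fusion objective:

  \min_{\omega}\; \ell_{\mathrm{fus}}\big(N_{\omega}(x, y);\, x, y\big) + \rho\, \ell_{\mathrm{task}}\big(T_{\theta}(N_{\omega}(x, y))\big),

where N_\omega is the fusion network applied to source images x and y, T_\theta is a downstream task network, and \rho is a trade-off weight; the task term is how downstream information guides the unsupervised fusion training in this sketch.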

2.
Article En | MEDLINE | ID: mdl-38335083

The complexity of learning problems, such as Generative Adversarial Networks (GANs) and their variants, multi-task and meta-learning, hyper-parameter learning, and a variety of real-world vision applications, demands a deeper understanding of their underlying coupling mechanisms. Existing approaches often address these problems in isolation, lacking a unified perspective that can reveal commonalities and enable effective solutions. Therefore, in this work, we propose a new framework, named Learning with Constraint Learning (LwCL), that can holistically examine these challenges and provide a unified methodology to tackle all of the above-mentioned complex learning and vision problems. Specifically, LwCL is designed as a general hierarchical optimization model that captures the essence of these diverse learning and vision problems. Furthermore, we develop a gradient-response based fast solution strategy to overcome the optimization challenges of the LwCL framework. Our proposed framework efficiently addresses a wide range of applications in learning and vision, encompassing three categories and nine different problem types. Extensive experiments on synthetic tasks and real-world applications verify the effectiveness of our approach. The LwCL framework offers a comprehensive solution for tackling complex machine learning and computer vision problems, bridging the gap between theory and practice.

3.
IEEE Trans Image Process ; 32: 6075-6089, 2023.
Article En | MEDLINE | ID: mdl-37922167

In recent years, there has been growing interest in combining learnable modules with numerical optimization to solve low-level vision tasks. However, most existing approaches focus on designing specialized schemes to generate image/feature propagation, and there is a lack of unified consideration for constructing propagative modules, providing theoretical analysis tools, and designing effective learning mechanisms. To mitigate the above issues, this paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC for short) principles with strong generalization for diverse optimization models. Specifically, by introducing a general energy minimization model and formulating its descent direction from different viewpoints (i.e., in a generative manner, based on a discriminative metric, and with optimality-based correction), we construct three propagative modules to effectively solve the optimization models with flexible combinations. We design two control mechanisms that provide non-trivial theoretical guarantees for both fully- and partially-defined optimization formulations. Supported by these guarantees, we can introduce diverse architecture augmentation strategies, such as normalization and search, to ensure stable propagation with convergence and to seamlessly integrate suitable modules into the propagation. Extensive experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
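
As a hedged illustration of the energy-minimization viewpoint above (notation ours), a learnable propagation step for a generic model can be written as a descent iteration:

  x^{k+1} = x^{k} - \alpha_{k}\, d_{k}, \qquad d_{k} \approx \nabla E(x^{k}), \qquad E(x) = f(x) + \lambda\, r(x),

where E is an energy composed of a fidelity term f and a prior r, and d_k is a descent direction that may be produced generatively (by a learned proposal), discriminatively (by a learned metric), or corrected toward optimality, matching the three GDC principles at a schematic level.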

4.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14693-14708, 2023 Dec.
Article En | MEDLINE | ID: mdl-37708018

In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks. Although they have achieved relatively satisfying practical performance, fundamental issues remain in existing ODL methods. In particular, current ODL methods tend to consider model construction and learning as two separate phases, and thus fail to formulate their underlying coupling and dependency relationship. In this work, we first establish a new framework, named Hierarchical ODL (HODL), to simultaneously investigate the intrinsic behaviors of optimization-derived model construction and its corresponding learning process. We then rigorously prove the joint convergence of these two sub-tasks from the perspectives of both approximation quality and stationarity analysis. To the best of our knowledge, this is the first theoretical guarantee for these two coupled ODL components: optimization and learning. We further demonstrate the flexibility of our framework by applying HODL to challenging learning tasks that have not been properly addressed by existing ODL methods. Finally, we conduct extensive experiments on both synthetic data and real applications in vision and other learning tasks to verify the theoretical properties and practical performance of HODL in various application scenarios.

5.
IEEE Trans Image Process ; 32: 4880-4892, 2023.
Article En | MEDLINE | ID: mdl-37624710

Deformable image registration plays a critical role in various tasks of medical image analysis. A successful registration algorithm, whether derived from conventional energy optimization or deep networks, requires tremendous effort from computer experts to design the registration energy well or to carefully tune network architectures for the medical data available in a given registration task/scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework to embrace the search for both network architectures and objectives with a cooperative optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, reducing the runtime for a volume pair from 0.558 to 0.270 seconds under the same configuration.

6.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15930-15948, 2023 Dec.
Article En | MEDLINE | ID: mdl-37552592

Gradient-based Bi-Level Optimization (BLO) methods have been widely applied to handle modern learning tasks. However, most existing strategies are theoretically designed under restrictive assumptions (e.g., convexity of the lower-level sub-problem) and are computationally impractical for high-dimensional tasks. Moreover, there are almost no gradient-based methods able to solve BLO in challenging scenarios such as BLO with functional constraints and pessimistic BLO. In this work, by reformulating BLO into approximated single-level problems, we provide a new algorithm, named Bi-level Value-Function-based Sequential Minimization (BVFSM), to address the above issues. Specifically, BVFSM constructs a series of value-function-based approximations and thus avoids the repeated calculations of recurrent gradients and Hessian inverses required by existing approaches, which are especially time-consuming for high-dimensional tasks. We also extend BVFSM to address BLO with additional functional constraints. More importantly, BVFSM can be used for challenging pessimistic BLO, which has never been properly solved before. In theory, we prove the asymptotic convergence of BVFSM on these types of BLO, discarding the restrictive lower-level convexity assumption. To the best of our knowledge, this is the first gradient-based algorithm that can solve different kinds of BLO (e.g., optimistic, pessimistic, and with constraints) with solid convergence guarantees. Extensive experiments verify the theoretical investigations and demonstrate our superiority on various real-world applications.
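
As context for the value-function idea (a standard construction, not the paper's exact algorithm), the lower-level problem can be folded into a single-level constraint via its optimal value:

  \min_{x,\, y}\; F(x, y) \quad \text{s.t.} \quad f(x, y) \le \varphi(x) := \min_{y'} f(x, y'),

so that the BLO is approximated by a sequence of single-level problems in which \varphi(x) is itself approximated, avoiding the repeated Hessian-inverse computations of implicit-gradient methods.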

7.
Food Chem Toxicol ; 178: 113871, 2023 Aug.
Article En | MEDLINE | ID: mdl-37277018

Our research aimed to investigate whether soluble thrombomodulin (sTM) relieves diquat (DQ)-induced acute kidney injury (AKI) via the HMGB1/IκBα/NF-κB signaling pathway. An AKI rat model was constructed using DQ. Pathological changes in renal tissue were detected by HE and Masson staining. Gene expression was determined using qRT-PCR, IHC, and western blotting. Cell viability and apoptosis were analysed using the CCK-8 assay and flow cytometry, respectively. An abnormal kidney structure was observed in DQ rats. The levels of blood urea nitrogen (BUN), creatinine (CRE), uric acid (UA), oxidative stress, and inflammatory responses in the DQ group increased on the 7th day but decreased on the 14th day compared with the control group. Additionally, HMGB1, sTM, and NF-κB expression increased in the DQ group compared with the control group, while IKKα and IκBα levels decreased. In addition, sTM relieved the damaging effects of DQ on renal tubular epithelial cell viability, apoptosis, and the inflammatory response. The levels of HMGB1, TM, and NF-κB mRNA and protein were significantly decreased in the DQ + sTM group compared with the DQ group. These findings indicate that sTM can relieve DQ-induced AKI through the HMGB1/IκBα/NF-κB signaling pathway, which provides a treatment strategy for DQ-induced AKI.


Acute Kidney Injury; HMGB1 Protein; Rats; Animals; NF-kappa B/genetics; NF-kappa B/metabolism; Diquat; NF-KappaB Inhibitor alpha; HMGB1 Protein/genetics; HMGB1 Protein/metabolism; Thrombomodulin/genetics; Acute Kidney Injury/metabolism; Kidney
8.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 5953-5969, 2023 May.
Article En | MEDLINE | ID: mdl-36215366

Images captured in low-light scenes often suffer from severe degradations, including low visibility, color casts, intensive noise, etc. These factors not only degrade image quality but also affect the performance of downstream Low-Light Vision (LLV) applications. A variety of deep networks have been proposed to enhance the visual quality of low-light images. However, they mostly rely on significant architecture engineering and often suffer from a high computational burden. More importantly, there is still no efficient paradigm to uniformly handle the various tasks in LLV scenarios. To partially address the above issues, we establish Retinex-inspired Unrolling with Architecture Search (RUAS), a general learning framework that can address the low-light enhancement task and has the flexibility to handle other challenging downstream vision tasks. Specifically, we first establish a nested optimization formulation, together with an unrolling strategy, to explore the underlying principles of a series of LLV tasks. Furthermore, we design a differentiable strategy to cooperatively search scene- and task-specific architectures for RUAS. Last but not least, we demonstrate how to apply RUAS to both low- and high-level LLV applications (e.g., enhancement, detection, and segmentation). Extensive experiments verify the flexibility, effectiveness, and efficiency of RUAS.
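
The Retinex model underlying the unrolling above is standard; as a brief reminder (our notation), a low-light observation is decomposed as

  y = z \otimes x,

where y is the observed low-light image, z the desired clear reflectance, x the illumination map, and \otimes element-wise multiplication; unrolling an optimization over this decomposition yields the stage-wise architecture that RUAS then searches over.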

9.
IEEE Trans Pattern Anal Mach Intell ; 45(1): 38-57, 2023 Jan.
Article En | MEDLINE | ID: mdl-34982677

In recent years, a variety of gradient-based methods have been developed to solve Bi-Level Optimization (BLO) problems in machine learning and computer vision. However, the theoretical correctness and practical effectiveness of these existing approaches always rely on restrictive conditions (e.g., the Lower-Level Singleton, LLS), which can hardly be satisfied in real-world applications. Moreover, previous literature only proves theoretical results for specific iteration strategies and thus lacks a general recipe to uniformly analyze the convergence behaviors of different gradient-based BLO methods. In this work, we formulate BLO from an optimistic bi-level viewpoint and establish a new gradient-based algorithmic framework, named Bi-level Descent Aggregation (BDA), to partially address the above issues. Specifically, BDA provides a modularized structure that hierarchically aggregates both the upper- and lower-level subproblems to generate our bi-level iterative dynamics. Theoretically, we establish a general convergence analysis template and derive a new proof recipe to investigate the essential theoretical properties of gradient-based BLO methods. Furthermore, this work systematically explores the convergence behavior of BDA in different optimization scenarios, i.e., considering the various solution qualities (global/local/stationary solutions) returned from solving the approximation subproblems. Extensive experiments justify our theoretical results and demonstrate the superiority of the proposed algorithm for hyper-parameter optimization and meta-learning tasks. Source code is available at https://github.com/vis-opt-group/BDA.
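
For reference, the optimistic bi-level formulation discussed here is conventionally written (in generic notation) as

  \min_{x \in \mathcal{X}} \; \min_{y \in S(x)} \; F(x, y), \qquad S(x) := \arg\min_{y \in \mathcal{Y}} f(x, y),

where the Lower-Level Singleton assumption amounts to requiring S(x) to be a single point; BDA is designed so that convergence can be analyzed without this restriction.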

10.
IEEE Trans Image Process ; 31: 4922-4936, 2022.
Article En | MEDLINE | ID: mdl-35849672

Underwater images suffer from severe distortion, which degrades the accuracy of object detection performed in underwater environments. Existing underwater image enhancement algorithms focus on restoring contrast and scene reflection; in practice, the enhanced images may not benefit detection and can even lead to a severe performance drop. In this paper, we propose an object-guided twin adversarial contrastive learning based underwater enhancement method to achieve both visually friendly and task-oriented enhancement. Concretely, we first develop a bilateral constrained closed-loop adversarial enhancement module, which eases the requirement for paired data through unsupervised training and preserves more informative features by coupling with the twin inverse mapping. In addition, to confer a more realistic appearance on the restored images, we also adopt contrastive cues in the training phase. To narrow the gap between visually oriented and detection-favorable target images, a task-aware feedback module is embedded in the enhancement process, where the coherent gradient information of the detector is incorporated to guide the enhancement towards the detection-pleasing direction. To validate the performance, we integrate a series of widely used detectors into our framework. Extensive experiments demonstrate that the enhanced results of our method show remarkable improvement in visual quality, and the accuracy of different detectors evaluated on our enhanced images is promoted notably. Moreover, we also conduct a study on semantic segmentation to illustrate how object guidance improves high-level tasks. Code and models are available at https://github.com/Jzy2017/TACL.

11.
IEEE Trans Image Process ; 31: 1190-1203, 2022.
Article En | MEDLINE | ID: mdl-35015638

This paper proposes a convex bilevel optimization paradigm to formulate and optimize popular learning and vision problems in real-world scenarios. Different from conventional approaches, which directly design their iteration schemes based on the given problem formulation, we introduce a task-oriented energy as our latent constraint, which integrates richer task information. By explicitly re-characterizing the feasibility, we establish an efficient and flexible algorithmic framework to tackle convex models with both a shrunken solution space and a powerful auxiliary (based on the domain knowledge and data distribution of the task). In theory, we present the convergence analysis of our latent-feasibility re-characterization based numerical strategy. We also analyze the stability of the theoretical convergence under computational error perturbations. Extensive numerical experiments are conducted to verify our theoretical findings and evaluate the practical performance of our method on different applications.

12.
IEEE Trans Neural Netw Learn Syst ; 33(10): 5666-5680, 2022 Oct.
Article En | MEDLINE | ID: mdl-33929967

Enhancing the quality of low-light (LOL) images plays a very important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework simultaneously estimates the illumination and reflectance but disregards the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., loss of detail, unsaturated colors, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism consisting of reflectance and illumination estimation networks. We design a novel context-sensitive decomposition connection to bridge the two-stream mechanism by incorporating the physical principle. Spatially varying illumination guidance is further constructed to achieve the edge-aware smoothness property of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and the context-sensitive decomposition generative adversarial network CSDGAN (unpaired supervision) to fully evaluate our designed architecture. We test our method on seven benchmarks, including MIT-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE), and conduct extensive analytical and evaluation experiments. Thanks to our designed context-sensitive decomposition connection, we obtain excellent enhanced results (with sufficient details, vivid colors, and little noise), which fully indicates our superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. Furthermore, by sharing an encoder for these two components, we obtain an even more lightweight version (SLiteCSDNet for short). SLiteCSDNet contains just 0.0301M parameters but achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN.

13.
IEEE Trans Neural Netw Learn Syst ; 33(8): 3425-3436, 2022 Aug.
Article En | MEDLINE | ID: mdl-33513118

Enhancing the visual quality of underexposed images is a widely studied task that plays an important role in various areas of multimedia and computer vision. Most existing methods often fail to generate high-quality results with appropriate luminance and abundant details. To address these issues, we develop a novel framework that integrates both knowledge from physical principles and implicit distributions from data to address underexposed image correction. More concretely, we propose a new perspective that formulates this task as an energy-inspired model with advanced hybrid priors. A propagation procedure navigated by the hybrid priors is designed for simultaneously propagating the reflectance and illumination toward the desired results. We conduct extensive experiments to verify the necessity of integrating both the underlying principles (i.e., with knowledge) and the distributions (i.e., from data) as navigated deep propagation. Numerous experimental results on underexposed image correction demonstrate that our proposed method performs favorably against state-of-the-art methods on both subjective and objective assessments. In addition, we carry out face detection to further verify the naturalness and practical value of underexposed image correction. Moreover, we apply our method to single-image haze removal, and the experimental results further demonstrate its superiority.

14.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 10045-10067, 2022 Dec.
Article En | MEDLINE | ID: mdl-34871167

Bi-Level Optimization (BLO) originated in the area of economic game theory and was then introduced into the optimization community. BLO is able to handle problems with a hierarchical structure, involving two levels of optimization tasks, where one task is nested inside the other. In machine learning and computer vision, despite their different motivations and mechanisms, many complex problems, such as hyper-parameter optimization, multi-task and meta-learning, neural architecture search, adversarial learning, and deep reinforcement learning, all contain a series of closely related subproblems. In this paper, we first uniformly express these complex learning and vision problems from the perspective of BLO. Then we construct a best-response-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies, covering aspects ranging from fundamental automatic differentiation schemes to various accelerations, simplifications, extensions, and their convergence and complexity properties. Last but not least, we discuss the potential of our unified BLO framework for designing new algorithms and point out some promising directions for future research. A list of important papers discussed in this survey, corresponding codes, and additional resources on BLO are publicly available at: https://github.com/vis-opt-group/BLO.
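
The best-response reformulation mentioned above can be sketched (in generic notation) as replacing the lower-level task by its solution map:

  \min_{x} \; \varphi(x) := F\big(x,\, y^{*}(x)\big), \qquad y^{*}(x) \in \arg\min_{y} f(x, y), \qquad \nabla \varphi(x) = \nabla_{x} F + \big(\nabla_{x} y^{*}(x)\big)^{\top} \nabla_{y} F,

where the gradient-based BLO families surveyed here differ mainly in how the response Jacobian \nabla_{x} y^{*}(x) is approximated, e.g., by automatic differentiation through unrolled lower-level iterations or by implicit differentiation.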

15.
IEEE Trans Image Process ; 31: 239-250, 2022.
Article En | MEDLINE | ID: mdl-34847030

Video deraining is an important issue for outdoor vision systems and has been investigated extensively. However, designing optimal architectures by aggregating model formation and data distribution is a challenging task for video deraining. In this paper, we develop a model-guided triple-level optimization framework to deduce the network architecture with a cooperative optimization and auto-searching mechanism, named Triple-level Model Inferred Cooperating Searching (TMICS), to deal with various video rain circumstances. In particular, to mitigate the problem that existing methods cannot cover various rain-streak distributions, we first design a hyper-parameter optimization model over the task variable and hyper-parameters. Based on the proposed optimization model, we design a collaborative structure for video deraining. This structure includes a Dominant Network Architecture (DNA) and a Companionate Network Architecture (CNA) that cooperate through an Attention-based Averaging Scheme (AAS). To better explore inter-frame information from videos, we introduce a macroscopic structure searching scheme that searches over an Optical Flow Module (OFM) and a Temporal Grouping Module (TGM) to help restore the latent frame. In addition, we apply differentiable neural architecture search over a compact candidate set of task-specific operations to discover desirable rain-streak removal architectures automatically. Extensive experiments on various datasets demonstrate that our model shows significant improvements in fidelity and temporal consistency over state-of-the-art works. Source code is available at https://github.com/vis-opt-group/TMICS.

16.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7688-7704, 2022 11.
Article En | MEDLINE | ID: mdl-34582346

Conventional deformable registration methods aim to solve an optimization model carefully designed for each image pair, and their computational costs are exceptionally high. In contrast, recent deep learning-based approaches can provide fast deformation estimation. However, these heuristic network architectures are fully data-driven and thus lack the explicit geometric constraints that are indispensable for generating plausible, e.g., topology-preserving, deformations. Moreover, these learning-based approaches typically pose hyper-parameter learning as a black-box problem and require considerable computational and human effort to perform many training runs. To tackle the aforementioned problems, we propose a new learning-based framework to optimize a diffeomorphic model via multi-scale propagation. Specifically, we introduce a generic optimization model to formulate diffeomorphic registration and develop a series of learnable architectures to obtain propagative updating in the coarse-to-fine feature space. Further, we propose a new bilevel self-tuned training strategy, allowing efficient search of task-specific hyper-parameters. This training strategy increases the flexibility to various types of data while reducing computational and human burdens. We conduct two groups of image registration experiments on 3D volume datasets, including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data. Extensive results demonstrate the state-of-the-art performance of the proposed method with diffeomorphic guarantees and extreme efficiency. We also apply our framework to challenging multi-modal image registration and investigate how our registration supports downstream tasks for medical image analysis, including multi-modal fusion and image segmentation.
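
For orientation, a common way to guarantee diffeomorphic (topology-preserving) deformations in this setting (our schematic, not the paper's exact parameterization) is to integrate a stationary velocity field:

  \phi = \exp(v), \qquad \min_{v} \; \mathcal{D}\big(I_{m} \circ \phi,\, I_{f}\big) + \lambda\, \mathcal{R}(v),

where I_m and I_f are the moving and fixed volumes, \mathcal{D} a similarity measure, \mathcal{R} a smoothness regularizer, and the exponential map (computed, e.g., by scaling and squaring) ensures that \phi is invertible.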


Algorithms; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Neuroimaging; Tomography, X-Ray Computed
17.
IEEE Trans Image Process ; 30: 8278-8292, 2021.
Article En | MEDLINE | ID: mdl-34559653

The Alternating Direction Method of Multipliers (ADMM) has been a popular algorithmic framework for separable optimization problems with linear constraints. Because numerical ADMM exploits neither the particular structure of the problem at hand nor the input data information, leveraging task-specific modules (e.g., neural networks and other data-driven architectures) to extend ADMM is a significant but challenging task. This work focuses on designing a flexible algorithmic framework that incorporates various task-specific modules (with no additional constraints) to improve the performance of ADMM in real-world applications. Specifically, we propose Guidance from Optimality (GO), a new customization strategy, to embed task-specific modules into ADMM (GO-ADMM). By introducing an optimality-based criterion to guide the propagation, GO-ADMM establishes an updating scheme that is agnostic to the choice of additional modules. Existing task-specific methods simply plug their modules into the numerical iterations in a straightforward manner, and even with restrictive constraints on the plug-in modules, they can only obtain relatively weaker convergence properties for the resulting ADMM iterations. In contrast, without any restrictions on the embedded modules, we prove the convergence of GO-ADMM with respect to objective values and constraint violations, and derive the worst-case convergence rate measured by iteration complexity. Extensive experiments are conducted to verify the theoretical results and demonstrate the efficiency of GO-ADMM.
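
For completeness, the classical ADMM iteration that GO-ADMM customizes solves \min_{x,y} f(x) + g(y) subject to Ax + By = c via

  x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\beta}{2}\,\|Ax + By^{k} - c + \lambda^{k}/\beta\|^{2},
  y^{k+1} = \arg\min_{y}\; g(y) + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - c + \lambda^{k}/\beta\|^{2},
  \lambda^{k+1} = \lambda^{k} + \beta\,\big(Ax^{k+1} + By^{k+1} - c\big),

with multiplier \lambda and penalty \beta; GO then decides where and how task-specific modules enter these updates under an optimality-based criterion, rather than plugging them in ad hoc.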

18.
IEEE Trans Neural Netw Learn Syst ; 32(6): 2430-2442, 2021 06.
Article En | MEDLINE | ID: mdl-32749966

Correlation filters (CFs) have recently been widely used for visual tracking. The estimation of the search window and the filter-learning strategy are key components of CF trackers. Nevertheless, prevalent CF models address these issues separately and in heuristic manners. Commonly used CF models directly set the location estimated in the previous frame as the search center for the current one. Moreover, these models usually rely on simple, fixed regularization for filter learning, and thus their performance is compromised by the search-window size and optimization heuristics. To break these limits, this article proposes a location-aware and regularization-adaptive CF (LRCF) for robust visual tracking. LRCF establishes a novel bilevel optimization model to address the location-estimation and filter-training problems simultaneously. We prove that our bilevel formulation can successfully obtain a globally converged CF and the corresponding object location in a collaborative manner. Moreover, based on the LRCF framework, we design two trackers, named LRCF-S and LRCF-SA, and a series of comparisons to prove the flexibility and effectiveness of the LRCF framework. Extensive experiments on different challenging benchmark datasets demonstrate that our LRCF trackers perform favorably against state-of-the-art methods in practice.
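
As background (the standard single-channel CF formulation, not the specific LRCF model), the filter-learning subproblem in a correlation filter tracker is a ridge regression that admits a fast closed-form solution in the Fourier domain:

  \min_{w}\; \|X w - y\|_{2}^{2} + \lambda\, \|w\|_{2}^{2} \;\;\Longrightarrow\;\; \hat{w} = \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda},

where X is a circulant matrix built from the search-window features, hats denote the discrete Fourier transform, * complex conjugation, and \odot element-wise operations; LRCF's bilevel model couples this filter learning with the estimation of the search window itself.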


Psychomotor Performance; Algorithms; Artificial Intelligence; Humans; Image Processing, Computer-Assisted; Machine Learning; Models, Neurological; Neural Networks, Computer; Pattern Recognition, Automated
19.
IEEE Trans Image Process ; 30: 1261-1274, 2021.
Article En | MEDLINE | ID: mdl-33315564

Image fusion plays a critical role in a variety of vision and learning applications. Current fusion approaches are designed to characterize source images for a certain type of fusion task and remain limited in wider scenarios. Moreover, generic fusion strategies (e.g., weighted averaging, choose-max) cannot handle the more challenging fusion tasks, which further leads to undesirable artifacts readily emerging in their fused results. In this paper, we propose a generic image fusion method with a bilevel optimization paradigm, targeting multi-modality image fusion tasks. A corresponding alternating optimization is conducted on certain components decoupled from the source images. Via adaptive integration weight maps, we obtain a flexible fusion strategy across multi-modality images. We successfully apply it to three types of image fusion tasks, including infrared and visible, computed tomography and magnetic resonance imaging, and magnetic resonance imaging and single-photon emission computed tomography image fusion. Results highlight the performance and versatility of our approach from both quantitative and qualitative aspects.
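
To make the role of the integration weight maps concrete (an illustrative form in our notation, not necessarily the exact decomposition used), a fused image can be assembled from two modalities as

  F = W \odot I_{1} + (1 - W) \odot I_{2},

where I_1 and I_2 are (components decoupled from) the source images and W is a spatially adaptive weight map with entries in [0, 1]; adapting W per pixel is what distinguishes such a scheme from fixed strategies like plain averaging or choose-max.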

20.
IEEE Trans Med Imaging ; 39(12): 4150-4163, 2020 12.
Article En | MEDLINE | ID: mdl-32746155

Compressed Sensing Magnetic Resonance Imaging (CS-MRI) significantly accelerates MR acquisition at a sampling rate much lower than that required by the Nyquist criterion. A major challenge for CS-MRI lies in solving the severely ill-posed inverse problem of reconstructing aliasing-free MR images from the sparse k-space data. Conventional methods typically optimize an energy function, producing high-quality restorations, but their iterative numerical solvers unavoidably bring extremely large time consumption. Recent deep techniques provide fast restoration by either learning a direct prediction of the final reconstruction or plugging learned modules into the energy optimizer. Nevertheless, these data-driven predictors cannot guarantee that the reconstruction follows the principled constraints underlying the domain knowledge, so the reliability of their reconstruction process is questionable. In this paper, we propose a deep framework assembling principled modules for CS-MRI that fuses a learning strategy with the iterative solver of a conventional reconstruction energy. This framework embeds an optimality-condition checking mechanism, fostering efficient and reliable reconstruction. We also apply the framework to three practical tasks, i.e., complex-valued data reconstruction, parallel imaging, and reconstruction with Rician noise. Extensive experiments on both benchmark and manufacturer-testing images demonstrate that the proposed method reliably converges to the optimal solution more efficiently and accurately than the state of the art in various scenarios.
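
The reconstruction energy referred to above typically takes the following standard form (generic notation):

  \min_{x}\; \tfrac{1}{2}\, \| P\,\mathcal{F} x - y \|_{2}^{2} + \lambda\, R(x),

where x is the MR image to recover, \mathcal{F} the Fourier transform, P the undersampling mask selecting acquired k-space locations, y the measured data, and R a sparsity-promoting prior; the proposed framework interleaves learned modules with an iterative solver of such an energy while checking optimality conditions along the way.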


Algorithms; Magnetic Resonance Imaging; Image Processing, Computer-Assisted; Reproducibility of Results
...