Results 1 - 5 of 5

1.
Article in English | MEDLINE | ID: mdl-37028340

ABSTRACT

Unpaired medical image enhancement has recently become an important topic in medical research. Although deep learning-based methods have achieved remarkable success in medical image enhancement, they are hindered by low-quality training sets and the scarcity of paired training data. In this paper, a dual-input image enhancement method based on a Siamese structure (SSP-Net) is proposed, which accounts for both target highlighting (texture enhancement) and background balance (consistent background contrast) learned from unpaired low-quality and high-quality medical images. Furthermore, the proposed method introduces a generative adversarial mechanism to achieve structure-preserving enhancement through jointly iterated adversarial learning. Comprehensive experiments illustrate the performance of the proposed SSP-Net in unpaired image enhancement compared with other state-of-the-art techniques.
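
As an illustration only (not the authors' code), the sketch below shows the dual-input Siamese idea in PyTorch: one weight-shared branch processes both the unpaired LQ and HQ images, and a discriminator supplies the adversarial signal. The module sizes, the residual form of the branch, the background-balance L1 term, and the loss weight of 10.0 are assumptions made for this example.

# Minimal sketch (not the authors' code): dual-input Siamese enhancement
# generator with shared weights, trained adversarially on unpaired LQ/HQ images.
import torch
import torch.nn as nn

class SharedBranch(nn.Module):
    """Weight-shared convolutional branch applied to both LQ and HQ inputs."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return self.body(x)

class SiameseEnhancer(nn.Module):
    """Feeds the unpaired LQ and HQ images through one shared branch so that
    texture enhancement and background contrast are learned jointly."""
    def __init__(self):
        super().__init__()
        self.branch = SharedBranch()
    def forward(self, lq, hq):
        enhanced = lq + self.branch(lq)      # residual enhancement of the LQ image
        hq_recon = hq + self.branch(hq)      # same weights keep HQ statistics stable
        return enhanced, hq_recon

discriminator = nn.Sequential(               # tells enhanced images from real HQ ones
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

gen, bce = SiameseEnhancer(), nn.BCEWithLogitsLoss()
lq, hq = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)   # unpaired batches
enhanced, hq_recon = gen(lq, hq)
adv = bce(discriminator(enhanced), torch.ones(4, 1, 16, 16))  # fool the discriminator
background = nn.functional.l1_loss(hq_recon, hq)              # background-balance term
loss = adv + 10.0 * background                                # assumed loss weighting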

2.
Article in English | MEDLINE | ID: mdl-37022081

ABSTRACT

Heterogeneous image fusion (HIF) is an enhancement technique for highlighting the discriminative information and textural detail of heterogeneous source images. Although various deep neural network-based HIF methods have been proposed, the widely used purely data-driven convolutional neural network offers neither a theoretically guaranteed architecture nor optimal convergence for the HIF problem. In this article, a deep model-driven neural network is designed for the HIF problem, adaptively integrating the interpretability of model-based techniques with the generalizability of deep learning-based methods. Rather than treating the network architecture as a black box, the proposed objective function is tailored into several domain-knowledge network modules, forming a compact and explainable deep model-driven HIF network termed DM-fusion. The proposed network demonstrates the feasibility and effectiveness of three components: the specific HIF model, an iterative parameter learning scheme, and a data-driven network architecture. Furthermore, a task-driven loss function strategy is proposed to achieve feature enhancement and preservation. Extensive experiments on four fusion tasks and downstream applications illustrate the advantages of DM-fusion over state-of-the-art (SOTA) methods in both fusion quality and efficiency. The source code will be available soon.
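
The following is a minimal sketch of one common reading of a "model-driven" network, a deep unrolled scheme that alternates a data-fidelity gradient step on the two source images with a small learned prior module; it is not the DM-fusion architecture itself. The stage count, learned step sizes, and prior CNN are illustrative assumptions.

# Minimal unrolling sketch (an interpretation, not DM-fusion): each stage takes a
# gradient step on a quadratic data term, then applies a learned prior module.
import torch
import torch.nn as nn

class PriorBlock(nn.Module):
    """Learned proximal/prior step standing in for a hand-crafted regularizer."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)                      # residual refinement

class UnrolledFusion(nn.Module):
    def __init__(self, stages=4):
        super().__init__()
        self.priors = nn.ModuleList(PriorBlock() for _ in range(stages))
        self.steps = nn.Parameter(torch.full((stages,), 0.5))  # learned step sizes
    def forward(self, src_a, src_b):
        x = 0.5 * (src_a + src_b)                   # simple initial fusion
        for step, prior in zip(self.steps, self.priors):
            grad = (x - src_a) + (x - src_b)        # gradient of the quadratic data term
            x = prior(x - step * grad)              # gradient step, then learned prior
        return x

fused = UnrolledFusion()(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))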

3.
Article in English | MEDLINE | ID: mdl-37796672

ABSTRACT

Unpaired medical image enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training. While most existing approaches are based on Pix2Pix/CycleGAN and are effective to some extent, they fail to explicitly use HQ information to guide the enhancement process, which can lead to undesired artifacts and structural distortions. In this article, we propose a novel UMIE approach that avoids this limitation by directly encoding HQ cues into the LQ enhancement process in a variational fashion, thus modeling the UMIE task under the joint distribution of the LQ and HQ domains. Specifically, we extract features from an HQ image and explicitly insert these features, which are expected to encode HQ cues, into the enhancement network to guide the LQ enhancement via a variational normalization module. We train the enhancement network adversarially with a discriminator to ensure that the generated image falls into the HQ domain. We further propose a content-aware loss to guide the enhancement process with wavelet-based pixel-level and multiencoder-based feature-level constraints. Additionally, since a key motivation for image enhancement is to make the enhanced images serve downstream tasks better, we propose a bi-level learning scheme that optimizes the UMIE task and downstream tasks cooperatively, helping to generate HQ images that are both visually appealing and favorable for downstream tasks. Experiments on three medical datasets verify that our method outperforms existing techniques in both enhancement quality and downstream task performance. The code and the newly collected datasets are publicly available at https://github.com/ChunmingHe/HQG-Net.
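
A minimal sketch of how HQ cues might be injected through a conditional normalization layer, in the spirit of the variational normalization module described above; the released HQG-Net code should be consulted for the actual design. The encoder features, channel sizes, and the scale/shift modulation form are assumptions made for this example.

# Minimal sketch (not the released HQG-Net code): features encoded from an HQ
# image predict a scale and shift that modulate instance-normalized LQ features.
import torch
import torch.nn as nn

class HQGuidedNorm(nn.Module):
    def __init__(self, lq_ch=64, hq_ch=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(lq_ch, affine=False)
        self.to_scale = nn.Conv2d(hq_ch, lq_ch, 3, padding=1)   # gamma from HQ cues
        self.to_shift = nn.Conv2d(hq_ch, lq_ch, 3, padding=1)   # beta from HQ cues
    def forward(self, lq_feat, hq_feat):
        hq_feat = nn.functional.interpolate(                    # match spatial size
            hq_feat, size=lq_feat.shape[-2:], mode="bilinear", align_corners=False)
        gamma, beta = self.to_scale(hq_feat), self.to_shift(hq_feat)
        return (1 + gamma) * self.norm(lq_feat) + beta

lq_feat, hq_feat = torch.rand(2, 64, 32, 32), torch.rand(2, 64, 16, 16)
out = HQGuidedNorm()(lq_feat, hq_feat)      # (2, 64, 32, 32), modulated by HQ statistics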

4.
IEEE Trans Cybern; 51(2): 521-533, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31059466

ABSTRACT

Establishing correspondence between two given geometrical graph structures is an important problem in computer vision and pattern recognition. In this paper, we propose a robust graph matching (RGM) model to improve effectiveness and robustness when matching graphs with deformations, rotations, outliers, and noise. First, we embed the joint geometric transformation into the graph matching model, which performs unary matching over graph nodes and local structure matching over graph edges simultaneously. Then, the L2,1-norm is used as the similarity metric in the presented RGM to enhance robustness. Finally, we derive an objective function that can be solved by an effective optimization algorithm, and we theoretically prove the convergence of the proposed algorithm. Extensive experiments on various graph matching tasks, including those with outliers, rotations, and deformations, show that the proposed RGM model achieves competitive performance compared with existing methods.
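
For reference, the sketch below computes the L2,1-norm of a node-correspondence residual, the robust similarity metric named in the abstract. The residual construction (difference between permuted node attributes) and the column-wise norm convention are illustrative assumptions, not the full RGM model.

# Minimal sketch: L2,1-norm of a matching residual (column-wise convention assumed).
import numpy as np

def l21_norm(R):
    """||R||_{2,1} = sum over columns of the column-wise Euclidean norms."""
    return np.sqrt((R ** 2).sum(axis=0)).sum()

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5))                     # attributes of 5 nodes in graph 1
perm = [2, 0, 4, 1, 3]
B = A[:, perm] + 0.01 * rng.normal(size=(3, 5)) # graph 2: permuted, noisy attributes
P = np.eye(5)[:, perm]                          # candidate permutation matrix
print(l21_norm(A @ P - B))                      # small residual for the correct matching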

5.
Article in English | MEDLINE | ID: mdl-33031037

ABSTRACT

Recently, the infrared small target detection problem has attracted substantial attention. Many works based on local low-rank models have proven very successful at enhancing discriminability during detection. However, these methods construct patches by traversing local images and ignore the correlations among different patches. Although this simplifies the calculation, some texture information of the target is discarded, and targets of arbitrary form cannot be accurately identified. In this paper, a novel target-aware method based on a non-local low-rank model and saliency filter regularization is proposed, with which the detection framework is cast as a non-convex optimization problem that enables joint target saliency learning in a lower-dimensional discriminative manifold. More specifically, non-local patch construction is applied in the proposed target-aware low-rank model: similar patches are grouped and reconstructed together to better generalize the non-local spatial sparsity constraints. Furthermore, to encourage target saliency learning, the proposed entropy-based saliency filtering regularization term is restricted to lie between the background and foreground. This regularization locally preserves the contexts of the target and surrounding areas and avoids a deviated approximation of the low-rank matrix. Finally, a unified optimization framework is proposed and solved with the alternating direction method of multipliers (ADMM). Experimental evaluations on real infrared images demonstrate that the proposed method is more robust in different complex scenes than some state-of-the-art methods.
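
As a rough illustration of the ADMM machinery cited above, the sketch below solves a generic low-rank-plus-sparse patch decomposition, separating a low-rank background from a sparse target component. The non-local patch grouping and the entropy-based saliency filter are omitted, and lambda, mu, and the iteration count are illustrative assumptions.

# Minimal sketch (not the paper's full model): ADMM for D = B (low-rank) + T (sparse),
# alternating singular value thresholding, soft thresholding, and a dual update.
import numpy as np

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse_admm(D, lam=0.05, mu=1.0, iters=100):
    """Split patch matrix D into background B (low-rank) plus target T (sparse)."""
    B = np.zeros_like(D)
    T = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(iters):
        # B-step: singular value thresholding of (D - T + Y/mu)
        U, s, Vt = np.linalg.svd(D - T + Y / mu, full_matrices=False)
        B = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # T-step: soft thresholding keeps only the sparse (target) residual
        T = soft(D - B + Y / mu, lam / mu)
        # dual update enforces D ~ B + T
        Y += mu * (D - B - T)
    return B, T

rng = np.random.default_rng(0)
D = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 80)) * 0.1  # low-rank background
D[30, 40] += 5.0                                               # small bright target
background, target = lowrank_sparse_admm(D)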
