Results 1 - 11 of 11
1.
Sci Rep ; 14(1): 12631, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38824200

ABSTRACT

In remote sensing image fusion, conventional Convolutional Neural Networks (CNNs) extract local features through layered convolution; limited by their receptive field, they struggle to capture global features. The Transformer uses self-attention to capture long-distance dependencies, giving it a global receptive field, but its computational cost on high-resolution images is excessive. To address these issues, this paper draws inspiration from the FusionNet network, combining the local detail extraction capability of CNNs with the global context modeling capacity of the Transformer. It presents a novel method for remote sensing image sharpening named Guided Filtering-Cross Stage Partial Network-Transformer, abbreviated as GF-CSTNet. This solution unifies the strengths of Guided Filtering (GF), Cross Stage Partial Network (CSPNet), and the Transformer. First, the method applies GF to enhance the acquired remote sensing image data. The CSPNet and Transformer structures are then combined to further improve fusion performance by leveraging their respective advantages. Subsequently, a Rep-Conv2Former method is designed to streamline attention and extract features across diverse receptive fields through a multi-scale convolution modulator block. At the same time, a reparameterization module merges the multiple branches generated during training into a single branch at inference, improving the model's inference speed. Finally, a residual learning module incorporating attention is devised to strengthen the modeling and feature extraction capabilities. Experimental results on the GaoFen-2 and WorldView-3 datasets demonstrate the effectiveness of the proposed GF-CSTNet: it extracts detailed information from images while avoiding spectral distortion.
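The GF building block named here is the classic guided filter (He et al.), not something specific to this paper. As a point of reference, a minimal single-channel numpy sketch (function name and defaults are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving filtering of image p, guided by image I.

    Within each (2r+1)^2 window the output is modeled as a local affine
    transform of the guide, q = a*I + b, with (a, b) fit by ridge regression.
    """
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # eps regularizes flat regions
    b = mean_p - a * mean_I
    # average the per-window coefficients before applying them
    return uniform_filter(a, size) * I + uniform_filter(b, size)
```

Because the regression collapses to a = 0 in flat regions, a constant image passes through unchanged while edges in the guide survive in the output.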

2.
IEEE J Biomed Health Inform ; 27(11): 5506-5517, 2023 11.
Article in English | MEDLINE | ID: mdl-37656654

ABSTRACT

Since Magnetic Resonance Imaging (MRI) requires a long acquisition time, various methods have been proposed to reduce it; however, they ignore frequency information and non-local similarity and thus fail to reconstruct images with clear structure. In this article, we propose Frequency Learning via Multi-scale Fourier Transformer for MRI Reconstruction (FMTNet), which focuses on restoring both low-frequency and high-frequency information. Specifically, FMTNet is composed of a high-frequency learning branch (HFLB) and a low-frequency learning branch (LFLB). We further propose a Multi-scale Fourier Transformer (MFT) as the basic module to learn non-local information. Unlike standard Transformers, MFT adopts Fourier convolution in place of self-attention to learn global information efficiently. Moreover, we introduce a multi-scale learning and cross-scale linear fusion strategy in MFT to exchange information between features of different scales and strengthen the feature representation. Compared with standard Transformers, the proposed MFT consumes fewer computing resources. Based on MFT, we design a Residual Multi-scale Fourier Transformer module as the main component of the HFLB and LFLB. Experiments under different acceleration rates and sampling patterns on several datasets show that our method is superior to previous state-of-the-art methods.
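The core trick, replacing self-attention with filtering in the Fourier domain, can be sketched with plain numpy; in the real model the spectral filter `w` would be learned, here it is just an array:

```python
import numpy as np

def fourier_conv(x, w):
    """Spectral convolution: elementwise filtering in the frequency domain.

    A single multiply in frequency space mixes every spatial location with
    every other, so the receptive field is global at O(n log n) cost,
    versus O(n^2) for self-attention.
    """
    X = np.fft.rfft2(x)              # real-input FFT: shape (H, W//2 + 1)
    return np.fft.irfft2(X * w, s=x.shape)
```

With `w` set to all ones the filter is the identity, which is a convenient sanity check for the transform round-trip.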


Subjects
Electric Power Supplies , Magnetic Resonance Imaging , Humans
3.
IEEE Trans Cybern ; 53(7): 4594-4605, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34910656

ABSTRACT

Most modern satellites can provide two types of images: 1) panchromatic (PAN) images and 2) multispectral (MS) images. The former has high spatial resolution and low spectral resolution, while the latter has high spectral resolution and low spatial resolution. To obtain images with both high spectral and high spatial resolution, pansharpening has emerged to fuse the spatial information of the PAN image with the spectral information of the MS image. However, most pansharpening methods fail to preserve spatial and spectral information simultaneously. In this article, we propose a framelet-based convolutional neural network (CNN) for pansharpening that makes it possible to pursue both high spectral and high spatial resolution. Our network consists of three subnetworks: 1) a feature embedding net; 2) a feature fusion net; and 3) a framelet prediction net. Unlike conventional CNN methods that directly infer high-resolution MS images, our approach learns to predict their framelet coefficients from the available PAN and MS images. The introduction of multilevel feature aggregation and hybrid residual connections makes full use of the spatial information of the PAN image and the spectral information of the MS image. Quantitative and qualitative experiments at reduced and full resolution demonstrate that the proposed method achieves more appealing results than other state-of-the-art pansharpening methods. The source code and trained models are available at https://github.com/TingMAC/FrMLNet.
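For orientation, the classical baseline that such networks improve upon is high-pass-filter detail injection: spatial structure taken from PAN, colors from MS. A hedged numpy sketch (this is the generic baseline, not the framelet CNN):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hpf_pansharpen(ms, pan, scale=4):
    """High-pass-filtering pansharpening (classical baseline):
    upsample each MS band, then inject the PAN image's high-frequency
    detail into every band.

    ms:  (bands, h, w) low-resolution multispectral cube
    pan: (h*scale, w*scale) panchromatic image
    """
    detail = pan - gaussian_filter(pan, sigma=scale)        # PAN high-pass
    up = np.stack([zoom(b, scale, order=1) for b in ms])    # bilinear upsample
    return up + detail[None, :, :]
```

The known weakness of this baseline, uniform detail injection distorting band-dependent spectra, is exactly what learned coefficient prediction addresses.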


Subjects
Learning , Neural Networks, Computer , Software
4.
Comput Biol Med ; 141: 104774, 2022 02.
Article in English | MEDLINE | ID: mdl-34785076

ABSTRACT

Cervical cancer is one of the leading causes of female-specific cancer death. Tumor region segmentation plays a pivotal role in both the clinical analysis and treatment planning of cervical cancer. Due to the heterogeneity and low contrast of biomedical images, current state-of-the-art tumor segmentation approaches struggle to detect small lesion regions. To tackle this problem, this paper proposes an augmented multiscale network (AugMS-Net) based on 3D U-Net to automatically segment cervical Magnetic Resonance Imaging (MRI) volumes. Since multiscale strategies are among the most promising approaches to small object recognition, we introduce a novel 3D module to explore more granular multiscale representations. In addition, we employ a deep multiscale supervision strategy to hierarchically supervise the side outputs. To demonstrate the generalization of our model, we evaluated AugMS-Net on both a cervical dataset of MRI volumes and a liver dataset of Computerized Tomography (CT) volumes. The proposed AugMS-Net shows superior performance over baseline models, yielding high accuracy while reducing the number of model parameters by nearly 20%. The source code and trained models are available at https://github.com/Cassieyy/AugMS-Net.
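The standard figure of merit for such segmentations (not spelled out in the abstract, but assumed here as the usual choice) is the Dice coefficient, which is especially punishing on missed small lesions:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).

    For a small lesion, missing a handful of voxels collapses the
    intersection term, which is why small-object sensitivity matters.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```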


Subjects
Image Processing, Computer-Assisted , Uterine Cervical Neoplasms , Algorithms , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Uterine Cervical Neoplasms/diagnostic imaging
5.
IEEE Trans Neural Netw Learn Syst ; 32(9): 3956-3970, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32845847

ABSTRACT

Image denoising is a challenging inverse problem due to complex scenes and information loss. Recently, various methods have been proposed to solve this problem by building a well-designed convolutional neural network (CNN) or introducing hand-designed image priors. Different from previous works, we investigate a new framework for image denoising that integrates edge detection, edge guidance, and image denoising into an end-to-end CNN model. To achieve this goal, we propose a multilevel edge features guided network (MLEFGN). First, we build an edge reconstruction network (Edge-Net) to directly predict clear edges from the noisy image. Then, the Edge-Net is embedded as part of the model to provide edge priors, and a dual-path network is applied to extract the image and edge features, respectively. Finally, we introduce a multilevel edge features guidance mechanism for image denoising. To the best of our knowledge, Edge-Net is the first CNN model specially designed to reconstruct image edges from a noisy image, and it shows good accuracy and robustness on natural images. Extensive experiments illustrate that our MLEFGN achieves favorable performance against other methods, and extensive ablation studies demonstrate the effectiveness of the proposed Edge-Net and MLEFGN. The code is available at https://github.com/MIVRC/MLEFGN-PyTorch.
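A classical analogue of the edge-guidance idea (offered only as intuition, with hand-crafted operators standing in for Edge-Net): estimate an edge map from the noisy input, then smooth aggressively in flat regions while leaving pixels near edges close to the original.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_guided_denoise(noisy, sigma=2.0):
    """Blend a strongly smoothed image with the original, weighted by a
    crude edge map, so smoothing is suppressed where edges are detected."""
    pre = gaussian_filter(noisy, 1.0)                 # light pre-smoothing
    edge = np.hypot(sobel(pre, axis=0), sobel(pre, axis=1))
    w = edge / (edge.max() + 1e-8)                    # ~1 near edges, ~0 flat
    smooth = gaussian_filter(noisy, sigma)
    return w * noisy + (1.0 - w) * smooth
```

The learned version replaces both the Sobel edge estimate and the fixed blending rule with trainable modules, which is what makes the edge prior robust at high noise levels.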

6.
Article in English | MEDLINE | ID: mdl-32092001

ABSTRACT

The task of single image super-resolution (SISR) is a highly ill-posed inverse problem, since reconstructing high-frequency details from a low-resolution image is challenging. Most previous CNN-based super-resolution (SR) methods tend to directly learn the mapping from the low-resolution image to the high-resolution image through complex convolutional neural networks. However, blindly increasing the depth of the network is not the best choice, because the performance improvement of such methods is marginal while the computational cost is huge. A more efficient approach is to integrate image prior knowledge into the model to assist reconstruction. Indeed, the soft-edge has been widely applied in many computer vision tasks as an important image feature. In this paper, we propose a Soft-edge assisted Network (SeaNet) to reconstruct high-quality SR images with the help of image soft-edges. The proposed SeaNet consists of three sub-nets: a rough image reconstruction network (RIRN), a soft-edge reconstruction network (Edge-Net), and an image refinement network (IRN). The complete reconstruction process consists of two stages. In Stage-I, the rough SR feature maps and the SR soft-edge are reconstructed by the RIRN and Edge-Net, respectively. In Stage-II, the outputs of the previous stage are fused and then fed to the IRN for high-quality SR image reconstruction. Extensive experiments show that our SeaNet converges rapidly and achieves excellent performance with the assistance of image soft-edges. The code is available at https://gitlab.com/junchenglee/seanet-pytorch.
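The two-stage pipeline can be caricatured with classical operators in place of RIRN/Edge-Net/IRN (a sketch of the control flow only, none of these operators are the paper's learned modules):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, zoom

def two_stage_sr(lr, scale=2, alpha=0.1):
    """Stage I: a rough upscaled image and a soft-edge map.
    Stage II: fuse them, here by simple additive edge sharpening."""
    rough = zoom(lr, scale, order=3)                       # rough reconstruction
    pre = gaussian_filter(rough, 1.0)
    soft_edge = np.hypot(sobel(pre, axis=0), sobel(pre, axis=1))
    return rough + alpha * soft_edge                       # fusion step
```

In SeaNet both the edge map and the fusion are learned jointly, so the edge branch acts as an explicit prior rather than a fixed sharpening heuristic.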

7.
IEEE Trans Vis Comput Graph ; 26(10): 2931-2943, 2020 Oct.
Article in English | MEDLINE | ID: mdl-30946670

ABSTRACT

Image colorization refers to a computer-assisted process that adds colors to grayscale images. It is a challenging task, since there is usually no one-to-one correspondence between color and local texture. In this paper, we tackle this issue by exploiting weighted nonlocal self-similarity and local consistency constraints at the resolution of superpixels. Given a grayscale target image, we first select a color source image containing segments similar to those of the target image and, after superpixel segmentation, extract multi-level features of each superpixel in both images. Then a set of color candidates for each target superpixel is selected by adopting a top-down feature matching scheme with confidence assignment. Finally, we propose a variational approach to determine the most appropriate color for each target superpixel from its color candidates. Experiments demonstrate the effectiveness of the proposed method and show its superiority over other state-of-the-art methods. Furthermore, our method can be easily extended to color transfer between two color images.
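The matching step reduces, in its simplest form, to nearest-neighbor search in feature space; a numpy sketch (the paper additionally keeps multiple candidates with confidences and refines them variationally):

```python
import numpy as np

def match_superpixel_colors(target_feats, source_feats, source_colors):
    """Give each target superpixel the color of the closest source
    superpixel in feature space.

    target_feats: (n_t, d), source_feats: (n_s, d), source_colors: (n_s, 3)
    """
    d = np.linalg.norm(target_feats[:, None, :] - source_feats[None, :, :],
                       axis=-1)                  # (n_t, n_s) distance matrix
    return source_colors[np.argmin(d, axis=1)]
```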

8.
Article in English | MEDLINE | ID: mdl-31841409

ABSTRACT

In this paper, we propose a novel Retinex-based fractional-order variational model for severely low-light images. The proposed method is more flexible in controlling the extent of regularization than existing integer-order regularization methods. Specifically, we perform the decomposition directly in the image domain and apply fractional-order gradient total variation regularization to both the reflectance component and the illumination component to obtain more appropriate estimates. The merits of the proposed method are as follows: 1) small-magnitude details are maintained in the estimated reflectance; 2) illumination components are effectively removed from the estimated reflectance; and 3) the estimated illumination is more likely to be piecewise smooth. We compare the proposed method with other closely related Retinex-based methods, and experimental results demonstrate its effectiveness.
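The underlying Retinex decomposition, image = reflectance × illumination, can be illustrated with the simplest Gaussian-surround estimate (a textbook sketch, not the fractional-order variational model, which estimates both components jointly):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=15.0, eps=1e-6):
    """Take the smooth component as illumination and the ratio image as
    reflectance, so that reflectance * illumination reproduces the input."""
    illumination = gaussian_filter(img, sigma) + eps
    reflectance = img / illumination
    return reflectance, illumination
```

The failure modes of this naive split, reflectance contaminated by residual illumination and halos around edges, are precisely what the regularized variational estimate is designed to avoid.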

9.
IEEE Trans Image Process ; 28(1): 227-239, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30136944

ABSTRACT

Pansharpening is a process of acquiring a multi-spectral image with high spatial resolution by fusing a low-resolution multi-spectral image with a corresponding high-resolution panchromatic image. In this paper, a new pansharpening method based on Bayesian theory is proposed. The algorithm rests on three assumptions: 1) the geometric information contained in the pan-sharpened image coincides with that contained in the panchromatic image; 2) the pan-sharpened image and the original multi-spectral image should share the same spectral information; and 3) in each pan-sharpened image channel, neighboring pixels away from edges are similar. We build our posterior probability model according to these assumptions and solve it by the alternating direction method of multipliers (ADMM). Experiments at reduced and full resolution show that the proposed method outperforms other state-of-the-art pansharpening methods. In addition, we verify that the new algorithm is effective in preserving spectral and spatial information with high reliability. Further experiments show that the proposed method can be successfully extended to hyper-spectral image fusion.
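ADMM itself is worth a minimal illustration. The toy problem below (quadratic data term plus an l1 prior, whose exact solution is soft-thresholding) shows the characteristic alternation of subproblems and dual update; the paper's actual data and prior terms are different:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam=0.5, rho=1.0, iters=100):
    """Toy ADMM: min_x 0.5*||x - y||^2 + lam*||x||_1 via the splitting x = z."""
    x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)    # prox subproblem
        u = u + x - z                           # dual (Lagrange) update
    return z
```

Each subproblem has a closed form here; in the pansharpening model the same alternation lets the geometric, spectral, and smoothness terms be handled one at a time.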

10.
IEEE Trans Image Process ; 25(4): 1516-29, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26849863

ABSTRACT

Spectral unmixing aims at estimating the proportions (abundances) of pure spectra (endmembers) in each mixed pixel of hyperspectral data. Recently, a semi-supervised approach, which takes a spectral library as prior knowledge, has attracted much attention in unmixing. In this paper, we propose a new semi-supervised unmixing model, termed framelet-based sparse unmixing (FSU), which promotes abundance sparsity in the framelet domain and discriminates between the approximation and detail components of the hyperspectral data after framelet decomposition. Owing to the advantages of framelet representations, e.g., images have good sparse approximations in the framelet domain and most additive noise is concentrated in the detail coefficients, the FSU model has better noise robustness and accordingly yields more desirable unmixing performance. The existence and uniqueness of the minimizer of the FSU model are then discussed, and the split Bregman algorithm and its convergence property are presented to obtain the minimal solution. Experimental results on both simulated and real data demonstrate that the FSU model generally performs better than the compared methods.
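The basic semi-supervised unmixing problem, sparse nonnegative abundances against a known library, can be sketched with projected ISTA in the pixel domain (FSU instead imposes the sparsity on framelet coefficients, and uses split Bregman rather than ISTA):

```python
import numpy as np

def sparse_unmix(A, y, lam=0.01, iters=500):
    """Projected ISTA for min_{x >= 0} 0.5*||A x - y||^2 + lam*||x||_1,
    with A the spectral library (bands x endmembers) and x the abundances."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)  # nonneg soft-threshold
    return x
```

The l1 term drives most library entries to exactly zero, reflecting the fact that each pixel mixes only a handful of endmembers.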

11.
IEEE Trans Image Process ; 22(7): 2822-34, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23613044

ABSTRACT

Pan-sharpening is a process of acquiring a high-resolution multispectral (MS) image by combining a low-resolution MS image with a corresponding high-resolution panchromatic (PAN) image. In this paper, we propose a new variational pan-sharpening method based on three basic assumptions: 1) the gradient of the PAN image can be modeled as a linear combination of those of the pan-sharpened image bands; 2) the upsampled low-resolution MS image can be seen as a degraded form of the pan-sharpened image; and 3) the gradient in the spectral direction of the pan-sharpened image should approximate that of the upsampled low-resolution MS image. An energy functional, whose minimizer is related to the best pan-sharpened result, is built on these assumptions. We discuss the existence of a minimizer of our energy and describe a numerical procedure based on the split Bregman algorithm. To verify the effectiveness of our method, we qualitatively and quantitatively compare it with state-of-the-art schemes using QuickBird and IKONOS data. In particular, we classify the existing quantitative measures into four categories and choose two representatives from each category for a more reasonable quantitative evaluation. The results demonstrate the effectiveness and stability of our method in terms of the related evaluation benchmarks. The comparison of computational efficiency with other variational methods also shows that our method is remarkably efficient.
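Assumption 1 can be made concrete with a least-squares fit: find band weights so that the weighted sum of MS bands approximates the PAN image, which then ties the fused bands' gradients to the PAN gradient. A numpy sketch (the weight-fitting step only, not the full variational solver):

```python
import numpy as np

def pan_band_weights(ms_up, pan):
    """Least-squares weights w such that sum_i w_i * MS_i best
    approximates PAN in a pixelwise sense.

    ms_up: (bands, h, w) upsampled MS image, pan: (h, w)
    """
    A = ms_up.reshape(ms_up.shape[0], -1).T     # (pixels, bands) design matrix
    w, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    return w
```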
