Results 1 - 2 of 2
1.
Neural Netw ; 177: 106378, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38761414

ABSTRACT

Transformer-based image denoising methods have shown remarkable potential but suffer from high computational cost and a large memory footprint due to the linear operations used to capture long-range dependencies. In this work, we aim to develop a more resource-efficient Transformer-based image denoising method that maintains high performance. To this end, we propose an Efficient Wavelet Transformer (EWT), which incorporates a Frequency-domain Conversion Pipeline (FCP) to reduce image resolution without losing critical features, and a Multi-level Feature Aggregation Module (MFAM) with a Dual-stream Feature Extraction Block (DFEB) to harness hierarchical features effectively. EWT is over 80% faster and uses more than 60% less GPU memory than the original Transformer, while still delivering denoising performance on par with state-of-the-art methods. Extensive experiments show that EWT significantly improves the efficiency of Transformer-based image denoising, providing a more balanced approach between performance and resource consumption.
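The abstract does not give implementation details of the Frequency-domain Conversion Pipeline, but the core idea it names — reducing image resolution without losing critical features — is a standard property of the discrete wavelet transform: one level halves the spatial size while remaining exactly invertible. The sketch below illustrates that property with a single-level 2-D Haar transform in NumPy; the function names are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform: (H, W) -> (H/2, W/2, 4).
    Halves spatial resolution; the four subbands together keep
    all information, so the transform is exactly invertible."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return np.stack([ll, lh, hl, hh], axis=-1)

def haar_idwt2(sub):
    """Inverse Haar transform: reconstructs the original image exactly."""
    ll, lh, hl, hh = (sub[..., i] for i in range(4))
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out
```

A denoising network operating on the subband stack sees a quarter as many spatial positions, which is how a wavelet front-end cuts a Transformer's attention cost without discarding image content.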


Subjects
Algorithms, Image Processing, Computer-Assisted, Wavelet Analysis, Image Processing, Computer-Assisted/methods, Signal-To-Noise Ratio, Humans
2.
IEEE Trans Image Process ; 33: 4202-4214, 2024.
Article in English | MEDLINE | ID: mdl-39008382

ABSTRACT

Both Convolutional Neural Networks (CNNs) and Transformers have shown great success in semantic segmentation tasks. Efforts have been made to integrate CNNs with Transformer models to capture both local and global context interactions. However, there is still room for enhancement, particularly when considering constraints on computational resources. In this paper, we introduce HAFormer, a model that combines the hierarchical feature extraction ability of CNNs with the global dependency modeling capability of Transformers to tackle lightweight semantic segmentation challenges. Specifically, we design a Hierarchy-Aware Pixel-Excitation (HAPE) module for adaptive multi-scale local feature extraction. For global perception modeling, we devise an Efficient Transformer (ET) module that streamlines the quadratic calculations associated with traditional Transformers. Moreover, a correlation-weighted Fusion (cwF) module selectively merges diverse feature representations, significantly enhancing predictive accuracy. HAFormer achieves high performance with minimal computational overhead and compact model size, reaching 74.2% mIoU on the Cityscapes and 71.1% mIoU on the CamVid test datasets, with frame rates of 105 FPS and 118 FPS on a single 2080Ti GPU. The source code is available at https://github.com/XU-GITHUB-curry/HAFormer.
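The abstract does not specify how the ET module streamlines the quadratic attention cost. A common family of techniques it likely relates to is linearized attention, where a positive feature map is applied to queries and keys separately so the matrix products can be reordered, turning the O(N²·d) softmax attention into O(N·d²). The sketch below contrasts the two in NumPy as a generic illustration, not the paper's actual module; the simple ReLU-based feature map is an assumption.

```python
import numpy as np

def softmax_attention(q, k, v):
    """Standard attention: the explicit (N_q, N_k) score matrix
    is the quadratic cost in the number of tokens."""
    scores = q @ k.T / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

def linear_attention(q, k, v):
    """Linearized attention: phi(Q) (phi(K)^T V) reorders the products
    so no N x N matrix is ever formed; cost is O(N * d^2)."""
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # positive feature map (assumed)
    qp, kp = phi(q), phi(k)
    kv = kp.T @ v                  # (d, d_v) summary of keys and values
    z = qp @ kp.sum(axis=0)        # per-query normalization term
    return (qp @ kv) / z[:, None]
```

With positive features, each output row of `linear_attention` is still a convex combination of value rows, so it behaves like attention while scaling linearly in the number of pixels — the property that matters for lightweight, high-resolution segmentation.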
