Results 1 - 2 of 2
1.
J Imaging Inform Med; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39147886

ABSTRACT

Accurate segmentation of skin lesions in dermoscopic images is of key importance for the quantitative analysis of melanoma. Although existing medical image segmentation methods have substantially improved skin lesion segmentation, they remain limited in combining local features with global information, handle challenging lesions poorly, and typically carry a large parameter count and high computational complexity. To address these issues, this paper proposes an efficient adaptive attention and convolutional fusion network for skin lesion segmentation (EAAC-Net). We designed two parallel encoders: the efficient adaptive attention feature extraction module (EAAM) adaptively establishes global spatial and global channel dependencies by constructing the adjacency matrix of a directed graph, and can adaptively filter out the least relevant tokens at the coarse-grained region level, thereby reducing the computational complexity of the self-attention mechanism. The efficient multiscale attention-based convolution module (EMA⋅C) applies multiscale attention for cross-space learning of local features extracted by the convolutional layers, enriching the representation of fine local detail. In addition, we designed a reverse attention feature fusion module (RAFM) to progressively strengthen effective boundary information. To validate the proposed network, we compared it with other methods on the ISIC 2016, ISIC 2018, and PH2 public datasets; the experimental results show that EAAC-Net achieves superior segmentation performance under commonly used evaluation metrics.
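The abstract's core efficiency idea — score tokens, drop the least relevant ones, then run self-attention only on the survivors — can be illustrated with a toy numpy sketch. This is not the paper's EAAM (which builds a directed-graph adjacency matrix and prunes at the coarse-grained region level, details not given in the abstract); the function name, the relevance heuristic, and the `keep_ratio` parameter are all assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pruned_self_attention(x, keep_ratio=0.5):
    """Toy token-pruned self-attention (hypothetical, for illustration).

    Scores each of the n tokens by the mean attention it receives, keeps
    the top keep_ratio fraction, and attends only over the kept tokens,
    shrinking the quadratic attention cost from n^2 to (n*keep_ratio)^2.
    """
    n, d = x.shape
    scores = softmax(x @ x.T / np.sqrt(d))      # full (n, n) attention map
    relevance = scores.mean(axis=0)             # how much each token is attended to
    k = max(1, int(n * keep_ratio))
    keep = np.sort(np.argsort(relevance)[-k:])  # indices of the top-k tokens, in order
    xk = x[keep]                                # pruned token set, shape (k, d)
    attn = softmax(xk @ xk.T / np.sqrt(d))      # attention over kept tokens only
    return attn @ xk, keep
```

With `keep_ratio=0.5`, half the tokens never enter the final attention product, which is the source of the complexity reduction the abstract claims for EAAM.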

2.
J Digit Imaging; 36(4): 1851-1863, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37038040

ABSTRACT

Multimodal medical fusion images are important for clinical diagnosis because they better reflect the location of disease and provide anatomically detailed information. Existing medical image fusion methods cause information loss in the fused image to varying degrees. We therefore designed a residual transformer fusion network (RTFusion): a multimodal fusion network with salient-information enhancement. The residual transformer lets image information interact over long ranges to preserve the global information of the image, while the residual structure reinforces feature information to prevent loss. A channel attention and spatial attention module (CASAM) is then added to the fusion process to enhance the salient information of the fused image, and a feature interaction module promotes the exchange of modality-specific information from the source images. Finally, a block-wise loss function is designed to drive the fusion network to retain rich texture detail, structural information, and color information, optimizing the subjective visual quality of the image. Extensive experiments show that our method recovers the salient information of the source images better than other advanced methods in both subjective visual assessment and objective metric evaluation. In particular, color and texture information are balanced to enhance the visual effect of the fused image.
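The channel-then-spatial attention pattern named by CASAM can be sketched minimally in numpy: squeeze spatial dimensions to weight channels, then squeeze channels to weight spatial positions. The abstract gives no architectural details, so this sketch is an assumption in the general style of such modules; real implementations learn the attention weights with small networks, whereas here the learned parts are replaced by simple pooling for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat):
    """Weight each channel of a (C, H, W) feature map by a gate derived
    from its global average (learned MLP omitted in this sketch)."""
    w = sigmoid(feat.mean(axis=(1, 2)))   # (C,) per-channel gate in (0, 1)
    return feat * w[:, None, None]

def spatial_attention(feat):
    """Weight each spatial position by a gate derived from the
    channel-averaged saliency map (learned conv omitted)."""
    m = sigmoid(feat.mean(axis=0))        # (H, W) spatial gate in (0, 1)
    return feat * m[None, :, :]

def casam_sketch(feat):
    """Hypothetical channel-then-spatial attention block in the spirit
    of CASAM; the real module's design is not given in the abstract."""
    return spatial_attention(channel_attention(feat))
```

The sequential composition means a feature survives strongly only if both its channel and its spatial position are judged salient, which matches the module's stated goal of emphasizing significant information in the fused image.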
