TSCA-Net: Transformer based spatial-channel attention segmentation network for medical images.
Fu, Yinghua; Liu, Junfeng; Shi, Jun.
Affiliation
  • Fu Y; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
  • Liu J; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
  • Shi J; School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China. Electronic address: junshi@shu.edu.cn.
Comput Biol Med ; 170: 107938, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38219644
ABSTRACT
Deep learning architectures based on convolutional neural networks (CNNs) and Transformers have achieved great success in medical image segmentation. Models built on the encoder-decoder framework, such as U-Net, have been successfully employed in many realistic scenarios. However, the low contrast between object and background, the varied shapes and scales of objects, and the complex backgrounds in medical images make it difficult to locate targets and to extract effective information for accurate segmentation. In this paper, an encoder-decoder architecture based on Transformer-built spatial and channel attention modules is proposed for medical image segmentation. Concretely, Transformer-based spatial and channel attention modules are used to extract globally complementary spatial and channel information at different layers of a U-shaped network, which helps the model learn detailed features at multiple scales. To better fuse the spatial and channel information from the Transformer features, a spatial and channel feature fusion block is designed for the decoder. The proposed network inherits the advantages of both CNNs and Transformers, combining local feature representation with long-range dependency modeling for medical images. Qualitative and quantitative experiments demonstrate that the proposed method outperforms eight state-of-the-art segmentation methods on five public medical image datasets spanning different modalities, achieving, for example, Dice scores of 80.23% and 93.56% and Intersection over Union (IoU) values of 67.13% and 88.94% on the Multi-organ Nucleus Segmentation (MoNuSeg) and Combined Healthy Abdominal Organ Segmentation with Computed Tomography scans (CHAOS-CT) datasets, respectively.
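To make the parallel spatial/channel attention and fusion idea described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' TSCA-Net implementation: the paper builds its attention modules from Transformer blocks, whereas this sketch uses simpler non-local-style spatial attention and a DANet-style channel affinity, and the module names (SpatialAttention, ChannelAttention, SpatialChannelBlock) and the reduction parameter are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' code: parallel spatial and channel
# self-attention over an encoder feature map, fused with a 1x1 convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Self-attention over spatial positions: every pixel attends to all
    H*W locations, giving it a global spatial receptive field."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C/r)
        k = self.key(x).flatten(2)                        # (B, C/r, HW)
        attn = F.softmax(q @ k, dim=-1)                   # (B, HW, HW)
        v = self.value(x).flatten(2)                      # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Self-attention over channels: a C x C affinity matrix captures
    global inter-channel dependencies."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feat = x.flatten(2)                                    # (B, C, HW)
        attn = F.softmax(feat @ feat.transpose(1, 2), dim=-1)  # (B, C, C)
        out = (attn @ feat).view(b, c, h, w)
        return self.gamma * out + x


class SpatialChannelBlock(nn.Module):
    """Run both attention branches in parallel and fuse them, loosely
    standing in for the paper's spatial-channel feature fusion block."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.spatial(x), self.channel(x)], dim=1))


if __name__ == "__main__":
    block = SpatialChannelBlock(channels=64)
    feat = torch.randn(2, 64, 32, 32)  # toy encoder feature map
    print(block(feat).shape)           # torch.Size([2, 64, 32, 32])
```

In a U-shaped network such a block would typically be applied to feature maps at several encoder or skip-connection scales before decoding, which is in the spirit of the multi-scale attention described in the abstract, though the exact placement in TSCA-Net is specified in the paper itself.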
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Computed Tomography / 29935 Study type: Qualitative_research Language: En Journal: Comput Biol Med / Comput. biol. med / Computers in biology and medicine Year: 2024 Document type: Article Country of affiliation: China Country of publication: United States of America