DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation.
Cai, Pengfei; Li, Biyuan; Sun, Gaowei; Yang, Bo; Wang, Xiuwei; Lv, Chunjie; Yan, Jun.
Affiliation
  • Cai P; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.
  • Li B; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China. lby@tute.edu.cn.
  • Sun G; Tianjin Development Zone Jingnuohanhai Data Technology Co., Ltd, Tianjin, China.
  • Yang B; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.
  • Wang X; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.
  • Lv C; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.
  • Yan J; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103564
ABSTRACT
Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, exhibit low contrast, and include many capillaries that blend into the background. Moreover, encoder-decoder-based networks for retinal vessel segmentation suffer irreversible loss of detailed features through repeated encoding and decoding, leading to incorrect vessel segmentation. In addition, single-dimensional attention mechanisms are limited because they neglect the importance of multidimensional features. To address these issues, we propose a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during decoding and to reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE, and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments on the DRIVE, CHASEDB1, and STARE datasets achieve Sen of 0.8305, 0.8784, and 0.8654 and AUC of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of our proposed network, particularly in the segmentation of fine retinal vessels.
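The reported Sen (sensitivity) and AUC are standard per-pixel metrics for vessel segmentation. As a minimal illustration of how they are defined (not the authors' evaluation code), the following sketch computes Sen from binarized masks and AUC via the rank-based Mann-Whitney formulation on probability maps; the toy inputs are hypothetical:

```python
def sensitivity(pred, gt):
    """Sen = TP / (TP + FN) over flattened binary masks (1 = vessel pixel)."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    return tp / (tp + fn)

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U statistic); assumes no tied scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Sum of 1-based ranks assigned to the positive (vessel) pixels.
    rank_sum = sum(r + 1 for r, i in enumerate(order) if labels[i] == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: 4 pixels, predicted mask vs. ground truth.
sen = sensitivity([1, 1, 0, 0], [1, 0, 1, 0])   # TP=1, FN=1 -> 0.5
a = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])     # perfectly ranked -> 1.0
```

In practice these are computed over every pixel of the test images, with AUC taken on the network's raw probability map before thresholding.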
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: J Imaging Inform Med Year: 2024 Document type: Article Country of affiliation: China
