SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation.
Ryu, Jihyoung; Rehman, Mobeen Ur; Nizami, Imran Fareed; Chong, Kil To.
Affiliation
  • Ryu J; Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea. Electronic address: jihyoung@etri.re.kr.
  • Rehman MU; Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea. Electronic address: cmobeenrahman@jbnu.ac.kr.
  • Nizami IF; Department of Electrical Engineering, Bahria University, Islamabad, Pakistan. Electronic address: imnizami.buic@bahria.edu.pk.
  • Chong KT; Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea; Advanced Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea. Electronic address: kitchong@jbnu.ac.kr.
Comput Biol Med ; 163: 107132, 2023 09.
Article in En | MEDLINE | ID: mdl-37343468
ABSTRACT
Retinal vessel segmentation is an important task in medical image analysis with a variety of applications in the diagnosis and treatment of retinal diseases. In this paper, we propose SegR-Net, a deep learning framework for robust retinal vessel segmentation. SegR-Net combines feature extraction and embedding, deep feature magnification, feature precision and interference, and dense multiscale feature fusion to generate accurate segmentation masks. The model consists of an encoder module that extracts high-level features from the input images and a decoder module that reconstructs the segmentation masks by combining features from the encoder module. The encoder module comprises a feature extraction and embedding block, enhanced by dense multiscale feature fusion, followed by a deep feature magnification (DFM) block that magnifies the retinal vessels. To further improve the quality of the extracted features, we use a group of two convolutional layers after each DFM block. In the decoder module, we utilize a feature precision and interference block and a dense multiscale feature fusion (DMFF) block to combine features from the encoder module and reconstruct the segmentation mask. We also incorporate data augmentation and pre-processing techniques to improve the generalization of the trained model. Experimental results on three publicly available fundus image datasets (CHASE_DB1, STARE, and DRIVE) demonstrate that SegR-Net outperforms state-of-the-art models in terms of accuracy, sensitivity, specificity, and F1 score. The proposed framework provides more accurate and more efficient segmentation of retinal blood vessels than state-of-the-art techniques, which is essential for clinical decision-making and the diagnosis of various eye diseases.
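The abstract does not spell out the internals of the DMFF block, but the general idea of dense multiscale feature fusion, pooling a feature map to several resolutions, restoring each to the input resolution, and concatenating along channels, can be sketched as follows. This is a hypothetical NumPy illustration of the concept, not the authors' implementation; the function names and the choice of average pooling with nearest-neighbour upsampling are assumptions.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on an (H, W, C) feature map (H and W even)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def dense_multiscale_fusion(x, num_scales=3):
    """Illustrative multiscale fusion (hypothetical stand-in for DMFF):
    pool the input to progressively coarser scales, upsample each scale
    back to the input resolution, and concatenate along channels."""
    feats = [x]
    cur = x
    for _ in range(num_scales - 1):
        cur = avg_pool2(cur)
        feats.append(cur)
    restored = []
    for i, f in enumerate(feats):
        for _ in range(i):  # upsample i times to undo i poolings
            f = upsample2(f)
        restored.append(f)
    return np.concatenate(restored, axis=-1)

x = np.random.rand(16, 16, 4)
fused = dense_multiscale_fusion(x, num_scales=3)
print(fused.shape)  # (16, 16, 12): three scales x 4 channels each
```

In a real network these pooled branches would pass through learned convolutions before fusion; the sketch only shows how features from multiple resolutions can be aligned and combined into one tensor.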

Full text: 1 Database: MEDLINE Main subject: Deep Learning Language: En Publication year: 2023 Document type: Article
